MEMORIAE – Neural Network Data Sets in AI and Machine Learning

    Cristina Cotruta
    Jul 5, '24 1:30 PM EST
Figure 1. The Severan Tondo, AD 199, depicting the portraits of Septimius Severus, Julia Domna and their sons Caracalla and Geta. The face of Geta has been erased as a result of the damnatio memoriae ordered by his brother Caracalla after having him assassinated. Source: https://almarsad.co/en/2019/11/12/a-libyan-hero-septimius-severus/portrait-of-the-family-of-septimius-severus/

    DAMNATIO MEMORIAE

The phrase damnatio memoriae – a ‘condemnation of memory’, modern in origin – captures a broad range of actions taken posthumously by the Roman government against former leaders, their authority and their reputation. Most prevalent during the Republican and Imperial periods, the tactic encompassed the defacement of all visual depictions and literary records of an individual the authorities decided to condemn. Damnatio memoriae was both a severe form of punishment and a way to enforce dominance and interfere with the course of history, with the progression of things.

The actual physical manifestation implied a collective act of expression meant to erase the person from the ‘memory’ of society. Memory was seen as the ultimate long-range threat restraining new, desired changes; it was also a tool for shaping a dissimilar, neoteric vision – an antithetic, altered, carefully induced truth. The cognition of a society was the omnipresent God, the corporeal invocation of power, both within and without. In ancient times, memory equaled a vicious form of tyranny that translated into an infinite abyss of unexplored possibilities, of unascertained power.

The phenomenon of damning memory can be traced through numerous historic records, such as the Cancelleria Reliefs (2), which depict Domitian’s military campaign, yet the head atop the emperor’s body is not Domitian’s but that of Nerva, who succeeded Domitian after his assassination and subsequent damnatio memoriae. Or the Severan Tondo (3), a panel painting depicting the Severan dynasty, in which the face of Geta has been deliberately erased following his assassination by his brother Caracalla, as part of Geta’s damnatio memoriae.


Figure 2. The Cancelleria Reliefs, a set of two incomplete bas-reliefs that originally depicted events from the life and reign of Domitian but were partially recarved following the accession of Emperor Nerva. Source: https://www.rome101.com/Cancel...

‘Infidels claim that the rule in the Library is not “sense,” but “non-sense,” and that “rationality” (even humble, pure coherence) is an almost miraculous exception. They speak, I know, of “the feverish Library, whose random volumes constantly threaten to transmogrify into others, so that they affirm all things, deny all things, and confound and confuse all things, like some mad and hallucinating deity.”’ (Jorge Luis Borges, ‘The Library of Babel’)

‘By exploring the Umwelten of other organisms we add to our own a host of different worlds; and the richness of our own world will increase in immeasurable ways. While physics is bound to impoverish the average thinker’s view of the world, biology will enrich it beyond measure.’ (Jakob von Uexküll, ‘The Theory of Meaning’)

Being able to freely manipulate and interpolate a person’s memory opens a series of prodigious possibilities. Historically, this came as a result of power and physical labor – destroying monuments, erasing images and generally deforming the communal memory; it was condemnation: a way of reshaping the narrative of the past, not of completely drowning it. Memory was an Umwelt in itself, treated as a receptacle of power; rendered into our current reality and modern technologies, memory could potentially fill the role of a disparate world characterized by behaviorism – a complex outer meaning-carrier. The subjective universes that exist within each person’s brain are immeasurable and hold enormous promise across varied fields; used ethically, they may become new outlines that advance our knowledge, environments and perception of self far more than current tools ever could.

    WORKING MEMORY

Every thought that could potentially develop into short- or long-term memory begins with sensory information. The brain captures it during the sensory register process – a brief activity lasting a few seconds, during which visual and auditory cues passively appropriate the stimuli and transform them into thoughts.


Figure 3. Dickens’s Dream, also known as ‘A Souvenir of Dickens’, an oil painting by the Victorian artist Robert William Buss. The painting features Dickens in his Gad’s Hill Place study, conjuring up his characters while he sleeps. Source: https://artuk.org/discover/art...

Essentially, human memory – and, as a result, thought formation – arises from a field of assorted layers which fuse, or at least cohere, working simultaneously to avoid meaning blindness and to prompt the individual toward one type of action or another. These layers constitute the physical world the individual inhabits, allowing for object formation as a result. Every thought held in memory carries the physical imprint of the environment around us, and these layers begin to create our Umwelt before its actual physical manifestation. A new physical ambiance begins its existence in the initial sensory information the individual acquires from a visual, auditory or tactile stimulus. So, all in all, what we observe with our eyes on a daily basis holds the potential for what we could later carry into a new, unexplored environment.

The prospect of appropriating these layers in their initial stages – before formation, or even after – holds unparalleled potential. Each layer is processed by an individual’s neural system to the best of its abilities and within certain constraints, be they environmental, social, economic, etc. What if these layers could be captured and processed from a more technological point of view? What if future automation research were to focus on reading and refining these sensory ‘enzyme’ originators? Could this be the gateway to the creation of a different Umwelt? Within our brain, different parts serve as different information processors that create memories and subsequently store them. Each has varied functions associated with certain types of memories, so the process of reading the exact moment of recognition by a third party could prove problematic. However, biology does grant us hope.


Figure 4. Amygdala location and function. Source: https://www.thoughtco.com/amygdala-anatomy-373211

The amygdala (7) is the part of the brain, located in the temporal lobes, responsible for emotional response, which is in turn directly linked to memory formation. The processing that happens within it supports long-term episodic memories. When information reaches this part of the brain, only a few seconds are needed for it to pass into sensory memory; once it has passed the sensory level, the individual is already capable of understanding the subject, forming a response to the stimulus and passing it on for storage. It has already been processed. This operation is performed solely by our brain’s mechanisms: it captures, reads, processes and then stores. Whatever ideas, conclusions and reactions we form about the environment happen here. We more or less visually create a new idea by understanding and analyzing our existing environment. The human brain is a finite, biological system with limited possibilities as we now understand it; the prospect of an outside, augmented mechanism able to collaborate with it might become the gate to a new understanding of the world.

At the moment, each individual contributes to our society by means of his or her understanding of the world, based in some shape or form on sensory abilities and memory formation. It is an involuntary act. But what if the world each individual perceives and stores on a daily basis could be interpreted not by the individual alone but with the help of a third-party system able to dissect the layers of biological memory formation – whether by reading each layer separately and uniting them into an end system at the cell level, or by reading the subset of layers more accessible to our current and developing technologies, combined with visual and auditory data directly captured as images and sound waves? Would the understanding gained by this exterior entity become a pathway to divergent opportunity? Would this ‘alien’ form be able to reproduce and remodel what each of us holds dear within the biological ‘case’ we call our body – our memory, our past – and transform it into a transitional physical environment we could actually inhabit? Every thought, event and place we have felt on an emotional level could potentially become the basis on which a future dwelling environment is built: ‘other’, but also familiar.

In ‘A Souvenir of Dickens’ (8), an oil painting by Robert William Buss, the artist depicts Dickens creating his world, his characters, within a ‘dream’ – his own ‘dream’. The ‘dream’ was present with him everywhere. It was not a physical manifestation per se – the technologies were lacking – yet from a metaphysical standpoint Dickens had his memory, his sensory visual world, always present within his corporeal existence, and this is how he was able to create new Umwelten. It was through this collision of two worlds that a unique, dissimilar reality was able to protrude and exist. What if artificial intelligence were able to become the bridge between these worlds and physically manifest each of our ‘dream’ worlds within our current reality, sensibility and environment? Would this change the spaces we inhabit and thus continually vary our understanding of the world? Could this process become a feedback loop for new geneses, antithetic to each other and in continuous transformation?

    LAYERED MEMORY

In an experiment conducted with the aim of investigating the neural mechanisms of emotional memory formation within the amygdala, scientists subjected seven epilepsy patients to an episodic memory task with emotional stimuli. The amygdala’s responses were recorded as local field potentials. In short, the memory task consisted of two phases: an encoding phase, during which participants emotionally rated a series of visual images, and a retrieval phase, in which the same images had to be rated again, from an emotional standpoint, by the same people. The tools used to monitor and register these responses were computational mechanisms that could distinguish and visually record such components as oscillatory power, phase synchrony and phase-amplitude coupling. A classifier was then trained on high-frequency activity inputs to predict a participant’s emotional response toward a given image and to test whether that response stayed consistent across the two phases.
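To make the classification step concrete: the study’s own analysis pipeline is not reproduced here, but a minimal sketch of the idea – assuming the band-power features have already been extracted from the local field potentials – might look like the following. All data, dimensions and labels below are illustrative placeholders, not the study’s recordings.

```python
# Minimal sketch of the classification idea described above: predict an
# emotional rating from amygdala local-field-potential features.
# The features and labels are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in features: per-trial high-frequency (gamma-band) power estimates
# across a handful of recording channels during the encoding phase.
n_trials, n_channels = 120, 8
gamma_power = rng.normal(size=(n_trials, n_channels))

# Stand-in labels: 1 = image rated emotionally salient, 0 = neutral.
labels = rng.integers(0, 2, size=n_trials)

# A linear classifier trained on encoding-phase features; the study's
# question is whether the learned mapping also predicts the same images'
# ratings at retrieval, i.e. whether the response stays consistent.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, gamma_power, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With placeholder data the accuracy hovers around chance; the point of the sketch is only the structure – features from one phase, a trained decision rule, and a consistency test against the other phase.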

The possibility of acquiring data from brain activity and reading an individual’s emotional responses toward the images, spaces and objects he or she encounters in daily life holds extensive promise. Feeding an AI mechanism with visual captures of heterogeneous properties, taken from an architectural environment the individual experiences and mentally stores in the long/short-term memory depository, is equivalent to creating a whole new set of spatial linguistics. AI processes information through a set of tools undefined in terms of direct observation, separate from human perception. But if a brain-computer interface were able to recreate a space, an object within a space, or even two-thirds of an image the individual sees, and further process it from a machine learning point of view, the initial environment would expand and appropriate new meanings. The disciplinary typologies of the room, house and street known to us would be lost, and new meanings could start to emerge. That would be the basis for a new 3D world, or even a virtual reality – one that differs from the initial data fed into the machine and one holding potential for new typological meanings.

Having a mechanism able to interpret you at an emotional level at the same moment you experience an environment, situation or object would give you feedback on your own perception. As an example, one might observe a vase of flowers and visually reproduce its features one way: crisp lines, yellow color, oval shape, etc. The same object, viewed by a machine learning system, would probably be perceived in an ontogenetic way. The result is a new environment: it holds the same initial meaning we took from the real world and stored in our brain, but it has acquired a different physical typology. It becomes a new object, a new visual stimulus, ready to be processed again and again in order to yield new meanings.

And a mechanism able to accompany individual users is vital, because each individual is unique. Each individual perceives spaces and events differently; for each, specific things are important and worthy of attention. The input (images, objects we remember, spaces we recall) is particular, and therefore the output would also be distinct to that person. If I were able to take all my experiences and the emotional responses I have attributed to them, feed the AI my own perception of my own world, and receive in response a new virtually modeled world of objects, colors and emotions built on what I have encountered, I would obtain a neoteric habitat exhibiting just that. The way this environment manifests might vary: it could come as a holographic, three-dimensional projection within my existing home, as an interior, space or object I deal with, or it could even be physically built with the help of future – or even current – technologies. I could choose how my own emotional perceptions of my own world begin to build the thread needed to raise a new linguistics of built typologies.

    IF YOU CAN DO IT, YOU CAN DO IT

    So, is it possible?

Is a brain-computer mechanism even viable?

The latest experiments (11), conducted by three research teams on auditory perception and AI symbiosis, succeeded in turning data obtained from electrodes surgically placed on the brain into computer-generated speech. Using computational models such as neural networks, the teams reconstructed words and sentences that were intelligible to human listeners. The researchers monitored the parts of the brain responsible for capturing sensory stimuli while people either read out loud, listened to recordings or mouthed speech. The information was fed into neural networks, which passed the data through a series of complex layers of computational ‘nodes’, learning by adjusting the connections between those layers. The data fed into the networks varied: actual brain responses to auditory stimuli, or the speech and recordings the person produced or heard. The output consisted of words reconstructed from unseen brain activity, and roughly 40% of the words the AI produced were understandable.
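The three teams used different architectures, none of which is reproduced here. Purely as an illustration of the decoding setup, a minimal regression network mapping a window of recorded brain features to a single speech-spectrogram frame could be sketched as below; the electrode count, window length and layer sizes are all assumptions.

```python
# Rough illustration of the decoding setup: a small network that maps a
# window of recorded brain features to one frame of a speech spectrogram.
# Dimensions and data are assumptions, not the published architectures.
import torch
import torch.nn as nn

N_ELECTRODES, WINDOW, N_MEL = 64, 20, 80  # assumed dimensions

class SpeechDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                           # (batch, WINDOW * N_ELECTRODES)
            nn.Linear(WINDOW * N_ELECTRODES, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Linear(256, N_MEL),                  # one spectrogram frame
        )

    def forward(self, x):
        return self.net(x)

model = SpeechDecoder()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for aligned (brain window, spectrogram frame) pairs.
brain = torch.randn(32, WINDOW, N_ELECTRODES)
target = torch.randn(32, N_MEL)

pred = model(brain)
loss = loss_fn(pred, target)
loss.backward()
optimizer.step()
# A vocoder would then turn predicted spectrogram frames back into audio,
# which is what human listeners evaluated for intelligibility.
```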

Capturing a visual image on the basis of visual stimuli holds the same potential – and the same technical complications. But dividing the process into layers of recognition allows for palpable results. Generating a new environment on the basis of existing neural stimulation within an individual could manifest in two possibilities:

The first possibility is to feed the neural network a data set consisting of images holding specific importance to the individual.
In the following images, 3D scans of significant objects tracing the author’s memory path are showcased and translated into a visually accessible digital environment. Metashape (12), the software used to process the data from the photographs taken, makes use of the GPU as well as AI; the addition of artificial intelligence allows for faster data processing. The process begins with the user importing a series of photos of a static object (scene, interior, etc.) into the software (Agisoft Metashape); he or she then establishes a set of measurements and inputs and waits for a workable 3D model to be generated. Once complete, the model can be forwarded for additional layer processing.

Figure 7. Image depicting how a set of image data is fed into the software (Agisoft Metashape) and processed to create sparse and dense point clouds and, as a final outcome, a 3D model. Source: author
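Agisoft also publishes a Python API for scripting this exact workflow. The sketch below follows that API as documented for the 1.x releases (later versions rename some calls, e.g. the dense cloud methods), so treat it as an outline rather than version-exact code; the photo paths are placeholders.

```python
# Sketch of the photo-to-model workflow described above, scripted with
# Agisoft Metashape's Python API (method names per the 1.x API; verify
# against your release, as some calls were renamed in later versions).
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Import the series of photos of the static object / scene.
photos = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # placeholder paths
chunk.addPhotos(photos)

# Detect and match features, then estimate camera positions
# (this yields the sparse point cloud).
chunk.matchPhotos()
chunk.alignCameras()

# Densify: depth maps, dense point cloud, then a textured mesh.
chunk.buildDepthMaps()
chunk.buildDenseCloud()
chunk.buildModel(source_data=Metashape.DenseCloudData)
chunk.buildUV()
chunk.buildTexture()

# Export the workable 3D model for the next processing layer (e.g. a GAN).
chunk.exportModel("scan.obj")
doc.save("scan.psx")
```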

The next mechanism/neural network is responsible for processing the data obtained and for outputting a new, divergent version of the image it has been fed. Here the initial object or environment loses its typology and acquires new meaning. Placed within a physical setting, the new ‘something’ is dematerialized; it becomes a ‘future’ typology of an already known ‘entity’, yet one alien to the individual. The image obtained as output from the AI process becomes an invaluable source of potential, as it holds a divergent Umwelt (13) that can manifest in one’s reality either as a holographic projection within the existing environment or, built through digital tools that might be available in the future, actually become palpable.

Figure 9. Image depicting the outcome of GAN-processed dense point clouds, initially obtained from Agisoft Metashape. Source: author


Figure 10. Images depicting a series of second-layer GAN-processed dense point clouds obtained within Agisoft Metashape and placed within an existing 3D interior environment. The result showcases the formation of new potential interior typologies that can be read and translated into workable 3D models. Source: author
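The text above does not pin down a specific GAN variant, so the following is only a generic sketch of the adversarial setup, in PyTorch: a generator learning to produce images resembling the rasterized dense-cloud renders, and a discriminator pushing it toward plausibility. The resolution, channel counts and the ‘real’ batch are assumptions, not the network behind the figures.

```python
# Generic GAN sketch for the second processing layer described above.
# It assumes the dense-cloud renders have been rasterized to 64x64
# grayscale images; the architecture is a DCGAN-style pair, not the
# specific network used to produce Figures 9 and 10.
import torch
import torch.nn as nn

LATENT = 100

generator = nn.Sequential(
    # latent vector -> 64x64 single-channel image
    nn.ConvTranspose2d(LATENT, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),       # 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),        # 16x16
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),        # 32x32
    nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),                             # 64x64
)

discriminator = nn.Sequential(
    nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2),    # 32x32
    nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),   # 16x16
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),   # 8x8
    nn.Conv2d(64, 1, 8, 1, 0),                       # 1x1 real/fake score
    nn.Flatten(), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.randn(16, 1, 64, 64)  # placeholder batch of rasterized scans

# One adversarial step: update the discriminator, then the generator.
z = torch.randn(16, LATENT, 1, 1)
fake = generator(z)
loss_d = bce(discriminator(real), torch.ones(16, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

loss_g = bce(discriminator(fake), torch.ones(16, 1))  # try to fool it
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The ‘divergence’ the text describes is exactly what an under-trained or loosely constrained generator produces: outputs that echo the input typology without reproducing it.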

However, even though the process might seem finite and singular, it is not. A feedback loop is established between the user (who in this case is the partial generator of the outcome) and the neural network. The output attained, now visually manifested within the user’s real interface, can again be perceived and understood by the sensory receptors and, as a result, taken through the same process that allowed its initial formation – again and again.

The second possibility, and the more convoluted one, implies as a first step the identification of brain activity on a series of levels within the amygdala and hippocampus. Reading each of these layers – as oscillations, color patterns, etc. – as a response to emotional valence requires a series of systems, each responsible for distinct tasks. Each algorithm handles certain aspects of image and memory formation, and each produces an output. To unite and analyze these separate problem formulations as a whole, a complex system is needed.

E2E, or End-to-End (14) learning, ‘refers to training a possibly complex learning system represented by a single model (a Deep Neural Network) that represents the complete target system, bypassing the intermediate layers’ mentioned above. A deep neural network is, in essence, an artificial neural network consisting of multiple layers between the input and the output. To ‘read’ the image from the sensory receptors in an individual’s brain, the network must be trained on multiple layer sensibilities, each responsible for one specific aspect of the input. What makes an image within the sensory receptors of our brain is dissected at a scale smaller than the image pixel: color input is treated as a binary data set, shape is detected as vector coordinates in 3D space, and all of it is registered at the cell level by the visual stimuli receptors. Subsequently, all the stimuli captured by the electrodes attached to this process are translated into data for the AI to assimilate into palpable, actionable information. Similar to how the brain works, each DNN layer can be trained to specialize in an intermediate task necessary to read the initial input, and then, as a whole, the network generates a transcript able to completely change the perception of the object-subject. As an example, the trained system architecture used to autonomously drive an automobile consists of 9 layers, each trained using real data recordings.
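That nine-layer driving network is a well-documented published example: a hard-coded normalization layer, five convolutional layers and fully connected layers mapping raw camera pixels straight to a steering command, with no hand-engineered intermediate representation. A sketch following common reproductions of that structure (layer sizes assumed from those reproductions, not authoritative):

```python
# Sketch of the layered end-to-end idea, using the well-known nine-layer
# driving network as the example: normalization, five convolutional layers
# and fully connected layers mapping camera pixels to a steering command.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),   # each conv layer learns
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),  # an intermediate task
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),  # implicitly, never
            nn.Conv2d(48, 64, 3), nn.ReLU(),            # specified by hand
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(                      # fully connected layers
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),                           # steering angle out
        )

    def forward(self, x):
        x = x / 127.5 - 1.0        # hard-coded normalization layer
        return self.head(self.features(x))

model = EndToEndDriver()
frame = torch.rand(1, 3, 66, 200) * 255  # one 66x200 camera frame
print(model(frame))                       # a single steering command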

Yes, reading human neural signals is different from feeding a standard data set into an AI system, but it rests on the same principle and, as current reality reveals, has been achieved at some level. All in all, the process is fairly straightforward:


- the individual identifies an object, space, shape, etc. able to produce a strong emotional response, using the sensory receptors within the different parts of the brain (in this case, visual);

- stores this information as either short- or long-term memory;

- electrodes attached to, and able to read, the sensory signals within the different brain compartments then forward the impulses to an AI mechanism (a DNN) consisting of a series of varied algorithms;

- the AI reads these signals, understands, analyzes and concludes them through its own machine-learning perceptional ability, and outputs them as a new environment – an image, or vector coordinate data – that is then translated directly into the real environment the individual resides in. The final output would most probably take the shape of a holographic projection or a built entity that completely changes the existing space. As a result, the individual begins to appropriate new understandings of new realities and to acquire new visual stimuli, continuously feeding the system with new information and data. A feedback loop is formed between the user and the architecture of the algorithm – always in motion, always renewed, with endless material prospects.

Yes, as a process in itself it is a fairly straightforward operation. From the point of view of human ability, however, at this point in our reality the possibility of understanding and reading the human mind with the help of a machine at such a deep level is still somewhat over the horizon line. But… not unreachable. The prospect of brain-computer collaboration to reveal new latent worlds already existing within each individual’s perception of current reality is beautiful and enticing, worthy of exploration and ‘atonement’.
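None of these stages exists off the shelf, so the loop the four steps describe can only be sketched as a thought experiment. In the sketch below, every function is a hypothetical stub standing in for hardware or models that would have to exist first; random arrays play the role of real electrode signals.

```python
# Thought-experiment sketch of the four-step loop above. Every function is
# a hypothetical stub: no such electrode reader, decoder or projector exists
# off the shelf, so random arrays stand in for real signals and outputs.
import numpy as np

rng = np.random.default_rng()

def read_sensory_signals():
    """Steps 1-3 (stub): electrode impulses from the visual/memory areas."""
    return rng.normal(size=128)

def decode_to_environment(signals):
    """Step 4 (stub): the DNN's re-interpretation of the signals as
    geometry - here just vector coordinates for a point cloud."""
    return signals.reshape(-1, 2) @ rng.normal(size=(2, 3))

def project(environment):
    """Stub for the holographic projection / built output."""
    print(f"projecting {len(environment)} points into the room")

# The feedback loop: each projected environment becomes a new stimulus,
# so the process never terminates at a single 'finished' output.
for cycle in range(3):
    signals = read_sensory_signals()
    environment = decode_to_environment(signals)
    project(environment)
```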

Figure 12. Diagram showcasing a possible process for the formation of new environment typologies based on the individual’s sensory receptors. Source: author

    BIBLIOGRAPHY:

1. Wikipedia, ‘Damnatio Memoriae’. Accessed December 3rd, 2020; https://en.wikipedia.org/wiki/...

2. Dr. Francesca Tronchin, ‘Damnatio memoriae – Roman sanctions against memory’, Khan Academy, last modified May 2018. Accessed December 3rd, 2020; https://www.khanacademy.org/hu...

    3. Eric Varner, ‘Mutilation and Transformation: Damnatio Memoriae and Roman Imperial Portraiture’ (E.J. Brill, 2004)

    4. Rome 101, ‘The Cancelleria Reliefs’. Accessed December 3rd, 2020; https://www.rome101.com/Cancel...

5. Wikipedia, ‘The Severan Tondo’. Accessed December 9th, 2020; https://en.wikipedia.org/wiki/...

    6. Jorge Luis Borges, The Library of Babel, (Argentina: Editorial Sur, 1941), p.117

    7. Donald Favareau, Essential Readings in Biosemiotics, Biosemiotics 3 (Springer Science + Business Media B.V, 2010), p.88

    8. Saul McLeod, ‘Working Memory Model’, Simply Psychology, last modified April 2012. Accessed December 5th, 2020 https://www.simplypsychology.o...

    9. The Human Memory, ‘Human Memory Storage’, November 25th, 2020. Accessed December 7th, 2020; https://human-memory.net/memor...

10. Shireen Parimoo, ‘Hippocampus – Amygdala Communication Supports Pattern Separation of Emotional Memories’, 2019. Accessed December 15th, 2020; https://www.brainpost.co/weekl...

    11. Regina Bailey, ‘Amygdala’s Location and Function’, July 2019. Accessed December 15th, 2020; https://www.thoughtco.com/amyg...

12. Old Book Illustrations, ‘Souvenir of Dickens’, December 2014. Accessed December 18th, 2020; https://www.oldbookillustrations.com/illustrations/souvenir-dickens/

    13. Terry Heick, ‘What’s a feedback loop in Learning? A Definition for teachers’, August 2020. Accessed December 18th, 2020; https://www.teachthought.com/l...

    14. Agisoft, ‘Agisoft Metashape’. Accessed December 18th, 2020 https://www.agisoft.com/

    15. Wikipedia, ‘Artificial Intelligence’. Accessed December 18th, 2020; https://en.wikipedia.org/wiki/...

16. Kelly Servick, ‘Artificial Intelligence Turns Brain Activity into Speech’, Science, January 2019, p.2

    17. Wikipedia, ‘Umwelt’. Accessed December 18th, 2020 https://en.wikipedia.org/wiki/...

    18. Felp Roza, ‘End-to-end learning, the (almost) every purpose ML method. Can E2E be used to solve every Machine Learning problem?’, May 2019. Accessed December 18th, 2020 https://towardsdatascience.com...

    19. Casey Reas, Ben Fry, ‘Processing’; https://processing.org/












     