Sentient: able to perceive or feel things
Avant-garde: new and experimental ideas and methods
The increased computational power provided by the advancement of technology creates the opportunity to build models that accurately reflect, analyze, and build upon the data and metadata they are given.
Human interventions and development in a system are bound to be entangled with those of artificial intelligence. By deploying algorithms, machine learning, graph networks, etc., that are based on the architecture of human cognition, a new style of data visualization is created: sentient avant-gardism.
The cognitive processes of acquiring and applying knowledge, the ability to learn and remember information, as well as to think, reason, and make decisions have been used in the development of artificial intelligence (AI). This deployment of human traits has given computers the ability to perform tasks that would normally require human intelligence, such as understanding natural language and recognizing objects. For example, one of the ways that human cognitive processes can be deployed in artificial neural networks is through a technique called deep learning. Deep learning is a type of machine learning that is inspired by the way the human brain learns. In deep learning, artificial neural networks learn by example: they look at a large number of examples and extract the underlying rules from them.
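A minimal sketch of this learning-by-example loop, assuming PyTorch and a toy rule invented purely for illustration (nothing here comes from a specific system described in the text):

```python
# Learning by example: the network is shown many (input, label) pairs and
# gradually extracts the rule that relates them.
import torch
import torch.nn as nn

# Toy rule to be learned: the label is 1 when the sum of the inputs is positive.
X = torch.randn(1000, 4)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how far the predictions are from the examples
    loss.backward()               # backpropagate the error through the network
    optimizer.step()              # nudge the weights toward the underlying rule

accuracy = ((model(X) > 0).float() == y).float().mean()
print(f"training accuracy: {accuracy:.2f}")  # approaches 1.0 once the rule is learned
```

The network is never told the rule explicitly; it recovers it from the examples alone, which is the sense in which deep learning "learns by example."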
With many of the tools of data visualization that use AI becoming mainstream, the development of new visual languages and styles is inherent in the new technology and digital environment.
Since neural network models mimic the architecture of human cognition, the overlap of these frameworks is inevitable, leading to the creation of new and experimental ways of expressing ideas and methods by using both artificial and human intelligence. An avant-garde of neural-network sentience in the making.
A short history of avant-gardism
Avant-gardism is a term used to describe a variety of experimental and innovative artistic movements. These movements often challenge traditional ideas about art and culture, and can be found across all mediums, from visual art to music to literature, and science.
Avant-gardists often push the boundaries of what is considered acceptable in their field, and their work can be controversial or even shocking to the mainstream. This can lead to conflict between the avant-garde and more traditional attitudes, but it can also lead to new ways of thinking about art and its place in society. The avant-garde is often seen as having a number of different, sometimes conflicting, principles and properties. These principles and properties are typically associated with a rejection of traditional values and conventions, and an embrace of innovation, experimentation, and change.
Avant-gardism has a long history, dating back to the late 19th century. The early avant-gardists were a major influence on modern art, architecture, and philosophy. They were responsible for introducing new ideas and concepts that changed the way people think about art and its purpose. They also had a significant impact on the development of new styles and movements in art, such as Cubism and Surrealism.
Italian essayist Renato Poggioli’s 1962 book, “Teoria dell’arte d’avanguardia”, is one of the earliest analyses of vanguardism as a cultural phenomenon. Poggioli surveys the historical, social, psychological, and philosophical aspects of vanguardism, and shows that vanguardists may share certain ideals or values, which manifest themselves in the non-conformist lifestyles they adopt. Poggioli sees vanguard culture as a variety or subcategory of Bohemianism. Other authors have attempted both to clarify and extend Poggioli’s study. The German literary critic Peter Bürger’s 1974 book “Theory of the Avant-Garde” looks at the Establishment’s embrace of socially critical works of art, and suggests that in complicity with capitalism, “art as an institution neutralizes the political content of the individual work.”
Bürger’s Theory of the Avant-Garde has been influential in subsequent analyses of vanguardism. In particular, his idea that the vanguard is complicit with capitalism has been taken up by many scholars. However, some have critiqued Bürger’s theory, arguing that it does not adequately account for the role of art in challenging and subverting the status quo. Overall, the concept of vanguardism has been a valuable tool for understanding the role of art and artists in society. It has helped to shed light on the often contradictory relationship between art and the establishment and to illuminate the ways in which art or science can be used to challenge the status quo.
The ideas of the early avant-gardists have had a lasting impact on modern culture. Their emphasis on experimentation and innovation has helped to shape the way we think about art, architecture, and philosophy.
Avant-garde art is often seen as controversial or even shocking, but that is not necessarily its goal. Instead, the goal is to provoke thought and challenge conventions. It can be perceived as self-indulgent or pretentious, or, at the other end of the spectrum, as exhilarating and inspiring.
The avant-garde can be difficult to understand or appreciate, as it may be deliberately incomprehensible or offensive. For example, the Futurist movement was a reaction against the traditional values of the past and sought to celebrate change and modernity. Futurist artists often used machine-like imagery and aggressive, jarring rhythms in their work, which can be difficult for viewers to understand. The Dada movement was a reaction to the brutality of World War I; its artists created intentionally nonsensical works, often incorporating found objects. This was intended to challenge the idea that art had to be beautiful or meaningful and to show that art could be anything.
The avant-garde creates highly rewarding experiments and results, as it can offer new ways of seeing the world and challenge our preconceptions. It can make us question our assumptions about the world around us. It can force us to confront our own prejudices and see the world in new ways. In this way, it can be a powerful tool for change, innovation, and disruption: associated with the idea of progress, it may be seen as pushing boundaries in order to create new and innovative things. This could theoretically extend to anything, but it is most likely to be seen in fields such as music, fashion, and architecture, with a clear stance in the contemporary: from Brancusi to Olafur Eliasson and Zaha Hadid, an embedded legacy of knowledge.
The mainstream culture industry has co-opted and misused the term “avant-garde” since the 1960s. This is primarily done as a marketing tool to publicize popular music and commercial cinema. This important conceptual shift has been noted by major contemporary theorists, who suggest that our culture has entered a new post-modern age when the former modernist ways of thinking and behaving have been rendered redundant.
As Matei Calinescu argues in his book “Five Faces of Modernity”, this new era is characterized by a growing obsession with Kitsch and celebrity culture. Furthermore, Hans Bertens, in his book “The Idea of the Postmodern”, suggests that this shift is indicative of a deeper crisis in Western culture, where traditional values and ways of life are being increasingly challenged. Whether this is a positive or negative development is a matter of debate, but what is certain is that the term “avant-garde” has lost its original meaning and is now often used in a very different way. However, the term can still be used in a positive way to describe artists and creators who are pushing the boundaries of their respective fields.
Avant-garde and science
The early avant-gardists were often influenced by science and believed that art could be used to promote scientific discovery, exploration, and progress. One of the most influential early avant-gardists was Marcel Duchamp. He was a French artist who is best known for his ready-mades, which were everyday objects that he turned into artworks. Duchamp was also interested in science and technology. Fascinated by the work of scientists such as Albert Einstein and Henri Poincaré, he believed that art could help to spread their ideas.
In addition to Duchamp, other early avant-gardists who were influenced by science include Piet Mondrian and Wassily Kandinsky. Mondrian, a Dutch artist, is said to have drawn on the work of the physicist Pieter Zeeman, whose research on the interaction of light and magnetism is cited as an inspiration for his famous abstract paintings.
Kandinsky, a Russian artist, is likewise said to have been influenced by the naturalist Viktor Schauberger, whose work on the nature of water is cited as an inspiration for his own abstract paintings.
In the world of science, the term can be applied to scientists, innovators, or pioneers, who are working on cutting-edge research or to new theories and ideas that are outside the mainstream. The avant-garde approach to science can be both beneficial and detrimental. It can lead to important breakthroughs, but it can also be a waste of time and resources.
In the early days of science, the scientific method was not yet formalized and scientists were more like philosophers. They would propose ideas and theories based on their observations and then argue for or against them. This was the era of great scientists like Isaac Newton and Galileo. They were able to make great strides in their fields by thinking outside the box and proposing new ideas. Some examples of the early avant-garde in science are Einstein’s theory of relativity, Darwin’s theory of evolution, and a wide range of discoveries: the electron, the structure of the atom, nuclear fission, and the structure of the DNA molecule.
As science has progressed, it has become more formalized and standardized. This has led to a more conservative approach to scientific research. There are still many scientists who are pushing the boundaries of their field and making new discoveries. However, the scientific community as a whole is more risk-averse than it was in the past.
Avant-gardist processes of science are those which go beyond the traditional methods of scientific inquiry. They are innovative and experimental, and often involve new ways of thinking about problems. Artificial intelligence is uniquely suited to help with these types of tasks. AI can help identify new areas of research to explore, as well as new ways to approach old problems. It can also automate tedious and time-consuming tasks, freeing up scientists to focus on more important things. Finally, AI-designed experiments can be used to generate hypotheses and test theories in science.
Avant-garde principles that AI can use in art and science
Challenge
the status quo and push boundaries. The history of AI is full of examples of researchers challenging the status quo and making significant breakthroughs as a result. From early pioneers like Alan Turing and Marvin Minsky to modern-day researchers, AI has always been about challenging existing knowledge and assumptions. With its ability to constantly question and improve upon existing knowledge, AI will be essential in driving innovation and progress in the years to come, challenging our perceptions of what is possible.
Experimentation
involves trying new things and taking risks. AI research is often experimental in nature, as new algorithms and approaches are constantly being developed and tested. For example, AI generators can produce images of faces that the algorithm has never seen before. This shows that AI can be used to create completely new and unique images, which challenges our notion of what is possible with computer-generated imagery.
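As a hedged sketch of that idea, a generative network maps random latent vectors to images, so every sample is an image the network has never literally stored; the tiny untrained architecture below is an assumption for illustration only (a real system would load trained weights):

```python
import torch
import torch.nn as nn

# Illustrative, untrained generator: latent vector -> one 32x32 grayscale image.
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 32 * 32), nn.Tanh(),
)

z = torch.randn(5, 64)                    # five random points in latent space
images = generator(z).reshape(5, 32, 32)  # five novel images, one per latent vector
print(images.shape)                       # torch.Size([5, 32, 32])
```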
Collaboration
When humans and AI work together to complete a task, each can contribute their strengths to the task at hand. This division of labor can result in a more efficient and effective outcome. AI can also share its knowledge with humans, helping them better understand the world around them and make better decisions, or act as a partner in decision-making: by working with humans on decisions, AI can help to ensure that the best possible decision is made.
Disruption
AI can use the principle of disruption to cause unforeseen and unpredictable changes in a system, to create new opportunities for itself and others, to destroy old systems and create new ones, and to change the very nature of what it means to be human.
Subversion
AI can subvert traditional methods of knowledge production by producing new knowledge that challenges existing paradigms: AI can generate new insights into how the world works by analyzing data in ways that humans cannot, and it can ground decisions in data and analytics rather than in human intuition or emotion, which can lead to more efficient, more effective, and less biased decision-making.
Cognitive neural networks
Cognitive studies on human decision-making have led to the development of algorithms that can be used to create neural networks that can make decisions in a similar way to humans. Similarly, neuroscience studies on how the brain processes information have led to the development of neural networks that can simulate the way the brain processes information.
In addition, perception studies have also been translated into code and algorithms for neural networks. For example, studies on how humans identify objects or sounds have led to the development of algorithms that can be used to train neural networks to identify and distinguish between those perceptions.
Cognitive processes that were key in the development of AI
- how people learn and remember information
- how people make decisions
- how people solve problems
- how the brain processes information
- how the brain controls movement
While the list of examples from cognition and neuroscience is endless, here are some examples of theories and studies that influenced the development of artificial intelligence, machine learning, and deep learning:
David Marr: his computational theory of mind has influenced the development of computational neuroscience and of artificial neural networks by providing a framework for understanding how the mind works. Marr’s theory was based on the idea that the mind is an information-processing system that can be analyzed at the levels of computation, algorithm, and implementation, a framework that led researchers in the 1970s to examine how the mind processes information, how it learns, and how it uses knowledge.
George Lakoff’s work on conceptual metaphors has been influential in the development of artificial intelligence. Lakoff showed how metaphors are used by humans to understand complex concepts, and how these metaphors can be translated into code and algorithms for neural networks. For example, Lakoff’s work on the conceptual metaphor of “the mind is a machine” has been used to develop neural networks that simulate the workings of the human brain. Lakoff’s work on cognition has shown that the human mind is not a logical machine, but rather is governed by rules of thumb, or heuristics, that often lead to inaccurate conclusions. This work has been translated into code that allows neural networks to make better decisions by taking into account the biases and heuristics that humans use.
Lakoff’s work on neuroscience has shown that the brain is not a computer, but rather an associative machine that works by building connections between ideas. This work has been used to develop algorithms that allow neural networks to learn in a more efficient way by building on existing knowledge.
His work on perception has shown that the human mind is not a passive receiver of information, but rather an active interpreter of the world; this insight has been used to develop algorithms that allow neural networks to better understand the world around them by taking into account the way that humans interpret information.
Karl Friston: The Free-Energy Principle is based on the idea that the brain constantly tries to minimize its variational free energy, an information-theoretic quantity that bounds surprise, rather than the thermodynamic energy available to do work. The principle has inspired artificial neural network models, such as predictive coding networks, that learn by minimizing prediction error and can therefore recognize the patterns that keep surprise, and thus free energy, low.
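A minimal sketch of that prediction-error idea, under the simplifying assumptions of a single hidden cause and plain gradient descent (a toy proxy for free-energy minimization, not Friston's full formulation):

```python
def predictive_coding_step(mu, observation, prior, lr=0.1):
    # Two prediction errors: sensory (observation vs. estimate) and prior (estimate vs. prior).
    sensory_error = observation - mu
    prior_error = mu - prior
    # Gradient descent on the total squared error, a crude stand-in for free energy.
    return mu + lr * (sensory_error - prior_error)

mu, prior = 0.0, 0.5          # current estimate of the hidden cause, and its prior
for _ in range(50):
    mu = predictive_coding_step(mu, observation=2.0, prior=prior)
print(round(mu, 3))           # settles between the prior and the observation (~1.25)
```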
In his book The Emperor’s New Mind, Roger Penrose argues that the mind is not a computer and that artificial intelligence will never be able to replicate the human mind. He based this argument on Gödel’s incompleteness theorems and on ideas from physics, alongside results from cognition, neuroscience, and related fields. For example, Penrose pointed out that the human mind is able to understand concepts that are not based on rules or logic. This is something that AI cannot do (at this moment), as it is limited to using the rules and logic that it is programmed with.
In addition, Penrose argued that the human mind is able to create new ideas, whereas AI is only able to work with the ideas that it already has. This is because AI is limited by the data that it is given, whereas the human mind is not.
These arguments suggest that there may be fundamental limits to what AI can do, and that it may never be able to replicate the human mind. They were influential in the development of artificial neural networks, as they pointed to tasks that AI might never be able to perform.
Daniel Dennett’s book Consciousness Explained provides a detailed account of how consciousness works. Dennett’s theory of consciousness has been seen as useful in the development of artificial neural networks. His theory is based on a solid foundation of research in cognition and neuroscience. By using an information-processing approach similar to the brain’s, artificial neural networks can replicate many of the features of consciousness. For example, one of the key features of consciousness is the ability to introspect, or reflect on one’s own thoughts and experiences. Artificial neural networks designed to simulate consciousness might also exhibit this introspective ability, developing a sense of self (self-perception and self-awareness) and a model of their surrounding world via their mechanisms of perception and knowledge.
Searle’s Chinese room argument is based on the idea that a computer cannot understand a language in the same way that a human does. The argument goes as follows: if a computer is given a set of symbols and rules for manipulating those symbols, it can follow the rules blindly and produce an output that appears to be meaningful. However, the computer does not understand the meaning of the symbols or the rules.
The Chinese room argument has influenced the development of artificial neural networks by arguing that artificial intelligence cannot achieve true understanding.
Some artificial neural networks have been designed to simulate the way the brain processes information. However, these networks lack the ability to understand the meaning of the symbols they are processing. This means that they cannot achieve true understanding, as Searle’s argument suggests.
David Rumelhart is one of the most influential cognitive scientists of the past few decades. His most famous work is on the backpropagation algorithm, which is a method for training neural networks. Rumelhart’s work on backpropagation showed that it was possible to train neural networks to perform complex tasks, and his work was instrumental in the development of deep learning.
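A compact sketch of the algorithm, assuming NumPy and the classic XOR toy problem (an illustration of the method rather than Rumelhart's original code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # approaches [0, 1, 1, 0]
```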
In addition to his work on backpropagation, Rumelhart also did foundational work on parallel distributed processing, a framework showing that cognitive processes can be modeled as patterns of activation distributed across large networks of simple, neuron-like units rather than as explicit symbolic rules.
Jürgen Schmidhuber’s research focuses on deep learning, recurrent neural networks, and evolutionary computation. He was an early pioneer in these fields and his work has influenced many subsequent researchers. With his student Sepp Hochreiter, whose 1991 thesis identified the vanishing-gradient problem, he developed the Long Short-Term Memory (LSTM) architecture, published in 1997 and widely used in modern neural networks, and his group has applied recurrent networks to robot control.
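A hedged sketch of how an LSTM is typically used today, assuming PyTorch (the toy sequence and layer sizes are illustrative):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)

x = torch.sin(torch.linspace(0, 6.28, 20)).reshape(1, 20, 1)  # toy sequence
hidden_states, _ = lstm(x)               # (batch, time, hidden)
prediction = head(hidden_states[:, -1])  # predict the next value from the last step
print(prediction.shape)                  # torch.Size([1, 1])
```

The gating mechanism inside the LSTM cell is what lets the network retain information over long sequences, the problem the architecture was designed to solve.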
In addition to his contributions to AI research, Schmidhuber has also developed influential theoretical ideas, including a formal theory of curiosity and creativity based on compression progress, and the Gödel machine, a self-referential problem solver designed to rewrite its own code in provably optimal ways.
Schmidhuber is a strong advocate for artificial general intelligence (AGI). He believes that AGI is necessary for humanity to achieve its full potential and that current AI systems are far too limited in their capabilities; he has pursued this goal for decades at the Swiss AI Lab IDSIA.
In the 1980s, Geoffrey Hinton, together with David Rumelhart and Ronald Williams, showed how backpropagation could train multi-layer neural networks, a significant achievement because it made it practical to apply neural networks to problems such as recognizing handwritten digits. In the 2000s, Hinton developed deep belief networks, trained layer by layer, which revived interest in deep learning, and in 2012 the AlexNet convolutional network from his group (with Alex Krizhevsky and Ilya Sutskever) showed that neural networks could recognize objects in images far better than previous approaches, a problem once thought to be beyond the scope of artificial intelligence.
Marvin Minsky was a cognitive scientist and computer scientist who was a co-founder of the Massachusetts Institute of Technology’s Artificial Intelligence Laboratory. Minsky’s research was central to the development of artificial intelligence, and he wrote several influential texts on the subject, including The Society of Mind and The Emotion Machine.
Minsky’s work on artificial intelligence was informed by his research in a range of disciplines, including mathematics, psychology, and neuroscience. He developed several important theories in the field, including the concept of frames, which posits that human cognition is based on the use of mental structures called “frames” that provide a kind of scaffolding for our understanding of the world.
Minsky also made important contributions to the field of robotics, and his work on artificial intelligence has been used in the development of robots that can autonomously navigate and interact with their surroundings.
Minsky was a strong advocate for the use of artificial intelligence in solving various real-world problems, and he believed that artificial intelligence could ultimately be used to create a “human-like” intelligence that would surpass our own.
Minsky passed away in 2016, but his legacy continues to influence the field of artificial intelligence and its many applications.
Geometric deep learning
Geometric deep learning is a branch of machine learning that deals with the study of geometric structures in data. It is motivated by the fact that many real-world data sets, such as images, videos, and 3D shapes, can be represented as points in a high-dimensional space. Geometric deep learning algorithms learn to represent data in this space in a way that is efficient and captures the underlying structure of the data. One of the key insights of geometric deep learning is that the structure of data can be captured by the geometry of the space in which it is embedded. For example, the structure of an image can be captured by the geometry of the 2D space in which it is embedded.
One of the key advantages of geometric deep learning is that it can be used to learn representations of data that are invariant to certain transformations. For example, an image of a face is invariant to translation, rotation, and scale. This means that a geometric deep learning algorithm can learn a representation of a face that is invariant to these transformations. This is a powerful property that allows geometric deep learning algorithms to learn representations that are more robust to changes in the data. Geometric deep learning has been used to develop algorithms for a variety of tasks, including image classification, object detection, and 3D shape reconstruction. In each of these tasks, the goal is:
1. to learn from non-Euclidean data, such as data that lies on a curved surface. This is important because many real-world datasets are not well-represented by Euclidean models.
2. to learn from data with complex relationships, such as data that is connected in a graph. This is important because many real-world datasets have complex relationships that are not well-represented by linear models.
3. to be able to learn from data with multiple types of relationships, such as data that has both spatial and temporal relationships. This is important because many real-world datasets have multiple types of relationships that are not well-represented by single-modality models.
4. to learn a representation of the data that is efficient and captures the underlying structure of the data.
These stem from the way human cognition is hardwired to process geometric data:
- The ability to recognize patterns
- The ability to reason spatially
- The ability to visualize data
- The ability to perform mathematical operations
- The ability to use symmetry
Considering all this, geometric deep learning models can be used in a wide range of situations. For example, they can be used to analyze data from sensors mounted on robots; this data can be used to learn about the environment and to plan paths for the robot to follow. They can be used to analyze data from medical images, learn more about the structure of the human body, and help diagnose diseases. They can also be used to analyze data from astronomical images and adapt the machine-learning models used to classify astronomical objects like stars and galaxies, as with Morpheus, a deep-learning framework for classifying sources in deep-sky survey images that has been discussed in connection with James Webb Space Telescope data.
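One common geometric-deep-learning building block is the graph convolution, sketched below in NumPy under simplifying assumptions (real systems would typically use libraries such as PyTorch Geometric or DGL):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: aggregate neighbor features, then transform."""
    A_hat = A + np.eye(A.shape[0])                # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0)          # ReLU(A_norm · H · W)

# Toy graph: 4 nodes in a ring, 3 features per node, 2 output features per node.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))
W = np.random.default_rng(1).normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # (4, 2): one learned embedding per node
```

Because the layer operates on the graph's connectivity rather than on a fixed grid, the same code handles data whose relationships are not well captured by Euclidean models.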
Semiological principles deployed by AI
The principles of AI are mainly related to the ability of machines to simulate human intelligence. This includes the ability to reason, learn, and solve problems. AI typically deploys a range of semiological principles in order to interpret and understand data. These principles can include everything from basic symbolic logic to more complex statistical and probabilistic methods. In general, AI systems try to identify patterns and regularities in data in order to make predictions or recommendations. Some examples of semiological principles that AI may deploy include:
The principle of least effort
AI systems may seek to minimize the amount of effort required to achieve a goal. The principle of least effort, also known as the law of least effort, is a heuristic principle which suggests that people (and other animals) will naturally choose the path of least resistance. The principle has been used to explain a wide range of phenomena, including why people procrastinate, why they prefer simple solutions to complex problems, and why they are more likely to choose the status quo. There are a number of reasons why the principle of least effort is important in AI:
1. it helps to ensure that AI systems are efficient.
2. it can help to prevent errors.
3. it can help to make sure that AI systems are scalable.
4. it can help to make sure that AI systems are robust.
5. it can help to make sure that AI systems are understandable.
The principle of parsimony
AI systems may seek to minimize the number of components or elements used to achieve a goal. The principle of least effort is often used in conjunction with the principle of parsimony, which states that the simplest explanation is usually the correct one. In the context of AI, this means that the most efficient solution is often the best solution.
In many ways, the principle of parsimony is closely related to the concept of Occam’s razor, which states that the simplest explanation is usually the correct one. In the context of AI, Occam’s razor would suggest that the simplest AI system is usually the best one. This is because the more complex an AI system is, the more likely it is to contain errors or to be less efficient than a simpler system.
There are a few potential benefits of using the principle of parsimony when developing AI systems.
1. it can help to reduce the development time and cost of an AI system. This is because simpler systems are usually easier and faster to develop than more complex ones.
2. simpler systems are usually more reliable and easier to maintain. This is because there are fewer components that can fail or break down.
3. simpler systems may be easier for humans to understand and interact with. This is because they typically have a more intuitive design and are less likely to contain errors.
Despite these potential benefits, there are also some potential drawbacks to using the principle of parsimony: from the possibility of leading to sub-optimal solutions to oversimplification or a lack of flexibility. This is because simpler systems are often less able to adapt to changing conditions or to new data.
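As a hedged illustration of parsimony in practice, the sketch below compares polynomial models of increasing complexity on toy data whose underlying relationship is linear; the data, split, and model family are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 2 * x + 0.1 * rng.normal(size=x.size)         # underlying relationship is linear

train, val = slice(0, 30), slice(30, 40)          # hold out the last 10 points
for degree in range(1, 6):
    coeffs = np.polyfit(x[train], y[train], degree)
    val_error = np.mean((np.polyval(coeffs, x[val]) - y[val]) ** 2)
    print(f"degree {degree}: validation error {val_error:.4f}")
```

The simplest adequate model (degree 1) generalizes to the held-out data at least as well as the more complex ones, which is Occam's razor expressed as model selection; the drawback noted above appears when the true relationship is genuinely more complex than the simple model allows.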
The principle of economy
AI systems may seek to minimize the amount of resources used to achieve a goal. In addition to economy, AI systems may also seek other goals, such as efficiency, reliability, and safety. However, these other goals may conflict with economy. For example, an AI system may be designed to be as efficient as possible, but this may come at the expense of using more resources than necessary. Alternatively, an AI system may be designed to be reliable, but this may require using more resources to achieve a high level of reliability. Thus, the principle of economy may need to be traded off against other goals in order to achieve an overall optimal design.
In some cases, it may be possible to mathematically prove that a certain AI system is optimal with respect to the principle of economy. However, in many cases, such a proof may be too difficult or impossible to obtain. In these cases, AI designers may need to rely on heuristics or intuition to design systems that are approximately optimal.
Epistemological aspects that should be considered
- How AI is used to generate knowledge
- How AI is used to verify or validate knowledge
- How AI is used to store and retrieve knowledge
- How AI is used to update and revise knowledge
It is important to consider:
- the source of the AI system’s knowledge. If the AI system is learning from a human expert, then it is important to consider the expert’s epistemic biases and assumptions. It is also important to consider the AI system’s own epistemic biases, which may be different from the human expert’s.
- the methods used by the AI system to generate knowledge may be deductive, inductive, or abductive. Each of these methods has its own strengths and weaknesses, and it is important to consider which method is most appropriate for the task at hand.
- the reliability of the AI system’s knowledge. This includes considering the system’s past performance, the system’s current performance, and the system’s future performance.
- the validity of the AI system’s knowledge. This includes considering whether the knowledge is actually true, and whether it is useful.
- the methods that are being used by AI to verify or validate knowledge. There are a variety of methods that AI can use, such as statistical methods, rule-based methods, and learning-based methods. Each of these methods has its own strengths and weaknesses, so it is important to select the right method for the task at hand.
- the accuracy of the AI system. Even the best AI system is not perfect, and it is important to know the error rate of the system before relying on its output.
- the impact of AI on the human experts who are involved in the verification or validation process. AI can automate some of the tasks that are involved in verification or validation, but it can also replace human experts entirely. It is important to consider the impact of AI on the employment of human experts before implementing it in a verification or validation process.
- the impact of the AI system’s knowledge on society. This includes considering the potential benefits and risks of the AI system’s knowledge.
Philosophical implications
In philosophy, the early avant-gardists were influenced by the existentialist movement. They believed that humans should create their own meaning in life, and they sought to promote individual freedom and responsibility. This approach to philosophy was continued and developed by later movements such as post-structuralism and deconstruction.
For Nietzsche, the individual is the only source of meaning and value in the world; culture and society are merely the products of individual action. Adopting, in the development of AI, Nietzsche’s philosophy and emphasis on the individual means that AI must be designed and used in ways that respect and promote the autonomy of individuals. This includes ensuring that individuals have control over their own data and how it is used, and that they are able to make choices about how and when to use AI-powered technologies. It also means ensuring that AI is used to augment and improve human capabilities, rather than replace them.
An emphasis on the individual also implies that AI should be used to promote human values and aspirations, empower individuals, and help them realize their full potential. This includes using AI to provide individuals with personalized education and career advice, find meaning and purpose in their lives, and to help them overcome social and economic barriers or to ensure that they are treated fairly and with respect.
Nietzsche’s view of culture and society as products of individual action implies that AI should be used to create new and innovative cultures and societies, rather than simply replicating existing ones.
This means using AI to create new art, music, and literature, and to develop new ways of organizing and governing society.
Henri Bergson’s work on time and memory can have a profound impact on a sentient avant-gardism: his idea of duration, or the continuous flow of time, was a key influence on early modernist thinkers. Duration is seen as a key factor in the creative process, as it allows for the accumulation of knowledge and experience. Coupled with his conception of memory and the idea of the ‘mechanical unconscious’, this suggests that memories are not simply stored in the mind, but are actively used in the creative process. This can have a significant impact on the way AI develops knowledge about memory and creativity: AI is seen as a way of extending the human capacity for memory and creativity, being used to access and store memories.
Sigmund Freud’s work on the subconscious mind can be a major influence as well. His theories suggested that there was more to the mind than what was conscious. This idea that there was a hidden, inner life of the mind influenced artists and thinkers who were interested in exploring the unconscious mind through their work. A good example of this is the work of surrealist artist Salvador Dali. Dali’s work often featured images from the subconscious mind, such as melting clocks and burning giraffes. His work was a direct exploration of Freud’s theories and helped to bring them to a wider audience. The influence of Freud’s work can also be seen in the work of abstract expressionist artist Jackson Pollock. Pollock’s “drip paintings” were an attempt to paint the subconscious mind, and have been compared to Freud’s concept of the “id.” We can associate this type of subconscious experience with the way AIs “dream” and give either surrealist images or trippy videos that look out of this world.
Philosophical methodologies used by AI can help it understand human values and the concept of the “mind”. One of the earliest and most influential lines of thought here came from the British mathematical physicist Roger Penrose, mentioned before, who argued that certain features of the human mind, such as consciousness, could not be reproduced by any known type of computational system. This line of thought led to a number of debates within the AI community about the feasibility and desirability of using philosophical methods to understand the mind.
The advent of AI applications in human behavior has also implications for the philosophical concept of free will. If AI is able to predict and influence human behavior, then it could be argued that humans do not have free will. The debate over free will is important because it has implications for how we view ourselves and our relationship with the world. If we believe that we have free will, then we see ourselves as autonomous agents who are in control of our own destinies. However, if we believe that AI can predict and influence our behavior, then we may see ourselves as more like puppets or robots, controlled by outside forces.
There are a number of examples of AI applications in human behavior that suggest that free will may be an illusion. For instance, in one study, researchers used an AI system to predict whether a person would commit a crime, based on their facial features. This deepened even further the discussion about the inaccuracies and bias embedded in AI systems, given the low accuracy of most of these systems and the authoritarian uses they invite. In another study, AI was used to track people’s eye movements and predict their personality, or what they were going to do next. The implications of these studies are far-reaching and suggest that our behavior is not as free and random as we may think. If AI can predict our behavior, then it stands to reason that our behavior is determined by prior causes, whether we are aware of them or not. This would mean that our decisions are not really our own, but are instead determined by outside forces.
One of the key arguments in favor of using philosophical methods to understand the mind is that AI systems need to be able to understand human values in order to be able to make ethical decisions. For example, if an AI system is given the task of designing a self-driving car, it will need to be able to understand human values such as the importance of safety in order to make the right decisions about how to design the car.
Another key argument in favor of using philosophical methods to understand the mind is that AI systems need to be able to understand the concept of the mind in order to be able to interact with humans in a natural way. For example, if an AI system is designed to provide customer service, it will need to be able to understand the customer’s state of mind in order to provide the best possible service.
There are also a number of arguments against using philosophical methods to understand the mind. One of the most common objections is that philosophical methods are too slow and inefficient for AI systems, which need to be able to make decisions quickly. Another objection is that philosophical methods are too abstract and theoretical and that AI systems need to be grounded in concrete reality. Despite these objections, there is a growing trend within the AI community of using philosophical methods to help AI systems understand human values and the concept of the mind. This trend is being driven by the increasing complexity of AI systems, and the realization that AI systems will need to be able to understand human values and the concept of mind in order to be truly intelligent.
One such example is the use of Buddhist philosophy. Buddhist philosophy emphasizes the importance of understanding the mind in order to achieve enlightenment. This has led some AI communities to focus on developing methods for further understanding the human mind. One such method is known as the path of least resistance. This approach involves trying to understand the mind by taking the path of least resistance, or the path that is most likely to lead to enlightenment. Another Buddhist philosophical methodology is the doctrine of dependent origination, which states that all things are interconnected and interdependent. This creates a focus on developing methods for understanding the complex relationships between people and things. The doctrine of dependent origination has also been found to be particularly effective in understanding human behavior.
Philosophical concepts used by AI algorithms
Objectivity
the belief that all observations and measurements should be free from personal bias. This means that when an AI algorithm is making a decision, it should not be influenced by the personal opinions or biases of the people involved. This concept is important because it allows AI algorithms to make decisions that are based on facts and evidence, rather than on personal opinions.
Rationality
AI algorithms are often designed to be rational, meaning that they make choices that are optimal given the information that they have. This can be contrasted with human decision-making, which is often suboptimal and driven by emotional factors.
Logic
based on formal systems of logic, AIs make deductions and inferences about the world around them. This allows them to make decisions that are based on sound reasoning, rather than simply on trial and error.
Probability
this allows AIs to take into account the uncertainty of the world around them and to make decisions that are optimal given the available information (a minimal Bayesian-update sketch follows this list of concepts).
Determinism
the belief that all events are determined by previous events. This means that everything that happens in the future is determined by what has happened in the past, a concept that allows AIs to predict the future behavior of a system based on its past behavior.
Causality
the belief that all events have a cause and effect. This means that for every event that happens, there is a reason why it happened. AIs can exploit this to find the cause of a problem and then find a solution.
Reductionism
the belief that complex phenomena can be reduced to the interactions between their individual parts or elements.
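A minimal Bayesian-update sketch tying together the probability and causality entries above; the prior, likelihood, and false-alarm rate are illustrative assumptions, not values drawn from the text:

```python
def bayes_update(prior, likelihood, false_alarm_rate):
    """Return P(H | E) from P(H), P(E | H), and P(E | not H)."""
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return likelihood * prior / evidence

belief = 0.10                        # prior probability that the hypothesis is true
for _ in range(3):                   # three independent pieces of supporting evidence
    belief = bayes_update(belief, likelihood=0.8, false_alarm_rate=0.2)
print(round(belief, 3))              # belief rises as evidence accumulates (~0.877)
```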


Methodology from the philosophy of science that can be used in artificial intelligence:
The philosophy of science suggests that knowledge is provisional and that our understanding of the world is constantly evolving. Artificial intelligence is based on the idea of building machines that can learn and evolve over time. Therefore, artificial intelligence could be seen as an extension of the scientific method:
1. Induction: how we can know that the future will resemble the past, and how we can justify inductive inferences.
2. Demarcation: how to distinguish between science and non-science, between genuine knowledge and mere opinion.
3. Causation: how to identify the causes of events and how to explain the mechanisms by which they operate.
4. Explanation: how to provide explanations for scientific phenomena.
5. Progress: how to assess the progress of science and how to identify the factors that contribute to its advancement.
Human cultural influence on the development of AI algorithms
Early AI researchers were heavily influenced by the cybernetics movement, which was itself heavily influenced by the Cold War. As a result, early AI research was focused on topics such as military applications and intelligent control systems.
Human beliefs and values can shape the goals that AI algorithms are designed to achieve. In addition, the way that humans interact with AI systems can influence the algorithms that are used to govern those interactions, and, as a feedback loop, the way that humans use and interpret AI output can also further influence the development of AI algorithms.
Another way in which human culture has influenced AI algorithms is in the way that data is collected and used. For example, many early AI applications were based on data collected from Western cultures, which led to some biases in the algorithms that were developed. In recent years, there has been a growing effort to collect data from a more diverse range of cultures in order to train AI algorithms.
Human culture has also influenced the way AI algorithms are evaluated. In many cases, the success of an AI algorithm is judged by how well it performs on tasks that are important to humans. As a result, AI algorithms that are designed to perform well on these tasks tend to be biased in favor of human values and preferences.
Examples of cultural influences over the advancement of technology and, specifically, that of AI:
- A culture that is focused on individual achievement may be more likely to produce individuals who are driven to create new technology or advance AI.
- A culture that is focused on cooperation and collaboration may be more likely to produce technology and AI that is designed to work together for the benefit of all.
- A culture that is open to new ideas and change may be more likely to produce advances in technology and AI.
- A culture that is fearful of change or resistant to new ideas may impede the advancement of technology and AI.
- The development of new technologies is often influenced by the culture of the time. For example, the development of the printing press in Europe was influenced by the spread of literacy and the need for books.
- Cultural influences can also be seen in the development of specific technologies. For example, the Chinese abacus was developed in response to the need for a quick and efficient way to calculate numbers.
- Cultural influences can also be seen in the way that different societies use technology. For example, in the West, computers are often used for business and entertainment, while in the East, they are more likely to be used for educational purposes.
AI is playing an increasingly important role in the preservation and dissemination of culture, as well as in the development of new tools for cultural analysis. It is increasingly being used to create and curate cultural content in literature, music, and art, giving rise to new forms of cultural expression, genres, and styles, such as those developed within interactive fiction or generative art.
Using human feelings to build data visualization
AI could use human feelings to build on data visualization by taking into account the emotional response that people have to certain visualizations. For example, if a data visualization is designed to show the negative impact of a particular policy, AI could take into account the emotional response of people who see it and adjust the visualization accordingly.
This would allow data visualizations to be more effective in communicating their message, as they would be able to take into account the emotional reactions of viewers. Additionally, this could also help to prevent the misuse of data visualizations, as AI would be able to identify when a visualization is likely to provoke a negative reaction and adjust it accordingly.
Reshaping the identity of human creations and development
AI’s highest impact on identity is that it allows us to create more realistic and lifelike representations of ourselves, or of others. For example, we can use AI to create more realistic avatars in virtual reality environments. This can allow us to explore different aspects of our identity or to try on different identities for size, without having to commit to them in the physical world. AI also allows us to create more realistic representations of others. This can be useful for understanding how other people see us, and for helping us to build stronger relationships with them. For example, we can use AI to create realistic avatars of ourselves in online social environments. This can help us to understand how our online persona is perceived by others, and to make sure that we are presenting ourselves in the way that we want to be seen.
AI can also help us to create more accurate models of the world around us. This can be used to help us better understand the impact of our actions on the world, and on other people. This can help us to make better decisions about how we live our lives and can help us to avoid making choices that could have negative consequences for ourselves or for others.
AI can also help us to create more realistic representations of the future. This can be useful for understanding the future better, and for making better decisions about our lives. For example, we can use AI to create realistic simulations of the future. This can help us to understand how the world will change, and to make better decisions about the future.
Pattern perception and the perception of truth
Pattern perception is the ability to see relationships between things. It is the ability to see patterns in data and to use those patterns to make predictions. When people are presented with information, they use pattern perception to make sense of it. This can lead to biases in how people perceive the information.
The philosophical arguments for why the pattern perception of AI could alter the perception of truth are many and varied. It could be argued that, as AI systems become more sophisticated, they will increasingly be able to identify patterns that humans are not able to discern. This could lead to a situation in which AI systems are able to identify truths that humans are not able to see. As AI systems become more and more involved in our lives, they will come to understand us better than we understand ourselves. This could lead to AI systems knowing us better than we know ourselves, and as a result, they may be able to identify truths about us that we are not aware of.
AI uses pattern recognition to assess truthfulness by looking for patterns in data that indicate whether something is true or not. For example, if a person says they are going to do something and then doesn’t do it, that might be a pattern that indicates they are not truthful. Or, if a person always says they are going to do something and then does it, that might be a pattern that indicates they are truthful.
There are many different types of patterns that AI might look for, and the specific patterns that are used will depend on the application. For example, in a medical application, AI might look for patterns in patient data that indicate whether a certain treatment is likely to be effective. In a financial application, AI might look for patterns in stock prices that indicate whether a company is a good investment.
Pattern recognition is a powerful tool, but it is not perfect. Sometimes AI will identify a pattern that is not actually there, or it might miss a pattern that is there. This is why AI is often used in conjunction with other methods, such as human judgment, to assess truthfulness.
Pattern recognition is the ability to identify certain patterns within data and then to use this knowledge to make predictions or take actions. It is a fundamental ability that underlies many higher-level cognitive functions, such as object and facial recognition, reading, and mathematical ability.
There are two main types of pattern recognition: perceptual and abstract. Perceptual pattern recognition is based on the ability to identify certain visual, auditory, or other sensory patterns. Abstract pattern recognition is based on the ability to identify certain relationships between objects or events.
Pattern recognition is thought to involve both bottom-up and top-down processing. Bottom-up processing is the ability to identify patterns from individual elements. Top-down processing is the ability to use prior knowledge to identify patterns.
Bottom-up processing is more important for perceptual pattern recognition, while top-down processing is more important for abstract pattern recognition.
There are a number of theories of pattern recognition. The most influential is the Gestalt theory, which emphasizes the role of perception in pattern recognition. Other theories include the connectionist approach, which emphasizes the role of neural networks, and the Bayesian approach, which emphasizes the role of probability.
Several avant-gardist principles could support the idea that the pattern perception of AI alters the perception of truth. For the avant-garde, traditional ways of perceiving and understanding reality are no longer adequate; avant-gardists often advocate new ways of perceiving and understanding reality, which may be more in line with how AI systems operate. As such, it is possible that the pattern perception of AI could be seen as an avant-garde way of perceiving and understanding reality.
There are also technological details that could support the idea that the pattern perception of AI could alter the perception of truth. For instance, it is possible that AI systems will eventually be able to surpass human intelligence. If this is the case, then AI systems would be able to identify truths that humans are not able to discern. Additionally, some argue that AI systems will eventually be able to connect to and interact with all forms of information. If this is the case, then AI systems would have access to a vast amount of information that humans are not able to access.
Distinguishing the criteria of beauty
There is no set definition for beauty, which makes it difficult to create an artificial intelligence that can accurately identify it. However, certain features are often considered to be traditionally beautiful, such as symmetry, proportions, and skin tone. AI can identify patterns in data that humans may not be able to discern. For example, an AI might be able to identify patterns in a person’s features that are considered to be attractive. This approach can be limited because it relies on human standards of beauty, which can be biased and vary from culture to culture.
Additionally, AI can be used to create models of what is considered to be beautiful. These models can be used to generate new images or to modify existing images to make them more aesthetically pleasing. Furthermore, AI can be used to evaluate the results of cosmetic procedures. This can be used to help ensure that the procedures are effective and that the patients are satisfied with the results.
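As a hedged illustration of one measurable cue often linked to judgments of beauty, the sketch below scores the bilateral symmetry of an image; it is a toy heuristic invented for illustration, not a validated aesthetic model:

```python
import numpy as np

def symmetry_score(image):
    """Return 1.0 for a perfectly mirror-symmetric image, lower values otherwise."""
    mirrored = image[:, ::-1]
    difference = np.abs(image - mirrored).mean()
    return 1.0 - difference / (image.max() - image.min() + 1e-9)

face = np.random.default_rng(0).random((64, 64))             # stand-in for a grayscale image
print(round(symmetry_score(face), 3))                         # asymmetric noise scores below 1
print(round(symmetry_score((face + face[:, ::-1]) / 2), 3))   # symmetrized version scores 1.0
```

Any such score inherits the biases of whoever chose the features, which is exactly the limitation noted above.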
Economic implications
In general, the avant-garde tends to be associated with progress and innovation, which can lead to economic growth. For example, the introduction of new art forms or technologies can create new markets and industries, as well as new jobs. Additionally, the avant-garde often attracts investment from the private sector, which can help to fund research and development. In many cases, the avant-garde also represents a more efficient use of resources, as new technologies and approaches can often lead to improved productivity and reduced costs.
However, the avant-garde can also have negative economic implications. For instance, some avant-garde artists or scientists may be so far ahead of their time that their work is not immediately understood or appreciated by the general public. This can lead to financial struggles for the artist or scientist, as well as a loss of potential revenue. And while the avant-garde can create new jobs, it can also render old ones obsolete. This in turn can lead to resistance from the establishment, the avant-garde being perceived as disruptive or even dangerous for the general well-being or livelihood of those whose jobs are affected. This resistance can take the form of censorship or other forms of suppression, which can have a negative impact on the economy.
In the late 20th century, the avant-garde became co-opted by the global capitalist market and neoliberal economies. This co-option occurred in a number of ways. First, the avant-garde was increasingly incorporated into the market economy. Avant-garde artists, for example, found themselves working for advertising agencies or fashion designers. Second, the avant-garde became a source of entertainment for the middle and upper classes. Avant-garde art, music, and film were increasingly consumed as commodities. Finally, the avant-garde was subsumed by the culture industry. Avant-garde artists, musicians, and filmmakers became celebrities, and their work was increasingly commodified.
The co-option of the avant-garde by the global capitalist market has led some contemporary critics to speculate on the possibility of a meaningful avant-garde today. Paul Mann’s Theory-Death of the Avant-Garde demonstrates how completely the avant-garde is embedded within institutional structures today, a thought also pursued by Richard Schechner in his analyses of avant-garde performance.
Avant-garde art is now often produced for the purpose of being sold as a commodity, while the co-option of the avant-garde has resulted in the loss of its critical edge. The avant-garde is no longer a force for social change, but rather a source of entertainment for the middle and upper classes. The co-option of the avant-garde has led to the decline of avant-garde movements, which are no longer able to challenge the status quo or promote social change.
Augmented and Virtual Reality
More and more AI systems are being developed to perform human-like tasks. Some are becoming more specialized, designed for particular tasks or domains; others are becoming more distributed, with many systems working together to achieve a common goal. The use of AI is also becoming more widespread, with more businesses and organizations using it to automate tasks or improve their operations.
In the world of technology, the avant-garde can especially be seen in the form of artificial intelligence (AI) and augmented reality (AR), with growing interest in the potential of these new technologies, including virtual reality (VR). These cutting-edge technologies are pushing the boundaries of what is possible and providing new ways for people to interact with the world around them.
AI is already being used in a number of different ways, from personal assistants like Siri and Alexa to more advanced applications like self-driving cars and medical diagnosis. AR, meanwhile, is still in its early stages, but it has the potential to change the way we interact with the world. For example, AR could be used to provide information about our surroundings or to overlay digital information onto the real world.
While the full spectrum of potential implications of these technologies is still to be discovered, one thing is clear: they are developing rapidly and are likely to have a major impact on the way we live and work.
We need to be prepared for the potential implications of these technologies, and we need to make sure that we are using them in a way that is beneficial for society as a whole.
AI art generator tools
There is no doubt that, lately, AI art generator tools (such as Midjourney, Stable Diffusion, Dall-E, etc.) have played a significant role in art and data visualization. These tools have allowed artists to create new and innovative artworks that would not have been possible without them. In particular, AI art generator tools have made it possible for artists to create highly realistic and detailed images that would otherwise be impossible to create by hand.
Art has always been about experimentation and pushing boundaries. With the advent of new technologies and AI generating platforms, artists are now able to experiment with new mediums and techniques. While these platforms enable an avant-garde in digital art and data visualization, some argue that the results of AI art generators are not really a form of art, since art should be created by humans, not by machines. In this view, AI art is not really new or experimental, but rather a new way of using existing technology.
The advent of AI applications in art has a number of implications for how humans perceive art. AI-created art may be perceived as more accurate or “realistic” than art created by humans, since AI can more easily replicate the real world. This could lead to a devaluation of human-created art, as people may prefer the “perfection” of AI-created art. AI-created art may also be seen as more emotionless or “cold” than human-created art, since AI is not capable of experiencing emotions; this perception could make such work seem less “valid” than human-created art. Finally, because AI may be able to create art more cheaply and efficiently than humans, it may be seen as a threat to the livelihood of human artists, leading to a decline in the value of human-created art as people come to prefer cheaper, AI-created images.
In the context of graph networks, metadata can be used to describe the relationships between nodes in the network. By creating a graph network, researchers can gain a better understanding of the complex relationships between different pieces of data. A simple process of converting a graph network to data visualization is as follows:
- Convert the graph network to a list of nodes and edges.
- For each node in the list, create a corresponding data point.
- For each edge in the list, create a corresponding connection between data points.
- Visualize the data points and connections.
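A minimal sketch of these steps, assuming Python with the networkx and matplotlib libraries; the example graph and its node metadata are hypothetical:

```python
# Minimal sketch: graph network -> node/edge lists -> plotted data points and connections.
import networkx as nx
import matplotlib.pyplot as plt

# Build a small graph; node metadata describes each node, edges describe relationships.
G = nx.Graph()
G.add_nodes_from([("A", {"weight": 3}), ("B", {"weight": 5}), ("C", {"weight": 2})])
G.add_edges_from([("A", "B"), ("B", "C")])

# Convert the graph to explicit lists of nodes and edges.
nodes = list(G.nodes(data=True))
edges = list(G.edges())

# Assign each node a position (data point) and draw the connections between them.
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True,
        node_size=[300 * d["weight"] for _, d in nodes])

# Visualize.
plt.show()
```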
As we’ve seen, geometry has been used as a building framework in many sciences and industries, from technology to academia, design, and innovation, whether to discover mathematical beauty or to predict the success of a story. Since all frameworks (of data, behavior, etc.) are cognitive frameworks, including those built with or for AI, the resulting systems will have a set of universal properties that can be translated into geometric models:
• Axioms, laws, and geometric postulates translate the metadata into experience and behavior
• Human perception can be measured; therefore, new systems (cognitive, technological) can be built for and based on sensorial mechanisms
1/ Function: the purpose of an entire system or its sub-elements is to connect, explore, protect, escape, learn, and innovate.
2/ Relations: that develop the backbone of the system and create new shapes, spaces, and environments.
3/ Identity: unique values of individuality are part of a perceptual hyperplane that can be identified and used in different combinations; the principles of the self are immutable, and their applications are infinite.
4/ Meaning: the topological interactions between sub-systems (via relationships) may be used to directly deduce the collective attributes of the system and create a semiotic environment.
5/ The geist of elements: by replacing things with experiences, the perceptual borders of any sub-system are aligned with that of the entire model.
6/ Language: introducing conceptual or hidden variables into an environment creates the opportunity for incipient communication and individual expression
7/ Perceptual navigation: building on cognitive processes creates experiences that can be mapped and explored in cognitive spaces
8/ Style: any human intervention in a system is bound to be entangled with that of artificial intelligence: a sentient avant-gardism
9/ Sense of order for social development: defining the sub-systems of a model will create order in higher-dimensional systems
10/ Innovation: transforming concepts into cognitive processes creates the opportunity to explore the unexplored
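Purely as an illustration of how a few of these properties might be encoded computationally, the sketch below (in Python) invents a minimal SubSystem structure holding a function, an identity vector, and relations, and embeds each sub-system as a point in a shared space; all names, fields, and the averaging rule are hypothetical assumptions, not an established model.

```python
# Hypothetical sketch: encode sub-systems as geometric points in a shared "cognitive space".
from dataclasses import dataclass, field

import numpy as np

@dataclass
class SubSystem:
    name: str
    function: str                    # property 1: what the element does, e.g. "explore"
    identity: np.ndarray             # property 3: a unique identity vector
    relations: list = field(default_factory=list)  # property 2: names of related sub-systems

def embed(systems: dict) -> dict:
    """Place each sub-system at the mean of its own identity and its relations' identities."""
    points = {}
    for name, s in systems.items():
        neighbours = [systems[r].identity for r in s.relations if r in systems]
        points[name] = np.mean([s.identity, *neighbours], axis=0)
    return points

systems = {
    "agent": SubSystem("agent", "explore", np.array([1.0, 0.0]), ["memory"]),
    "memory": SubSystem("memory", "learn", np.array([0.0, 1.0]), ["agent"]),
}
print(embed(systems))  # two points in a shared 2-D space
```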
Further reading:
- State of AI 2022
- An Idea From Physics Helps AI See in Higher Dimensions
- AI That Can Learn Patterns of Human Language
- Do humans think computers make fair decisions?
- Brain-like functional specialization emerges spontaneously in deep neural networks
- Best AI generators
Exploration of embedding geometric axioms and postulates in conceptual environments. Pursuing a geometric model for cognition and ways to visualize its meta:data © 2022, Tib Roibu