Me: do you find Curtis Yarvin credible?
Curtis Yarvin is an interesting figure, but “credible” depends on what you mean. As a thinker, he’s undeniably provocative—his critiques of democracy and advocacy for “neocameralism” (a kind of CEO-monarch model of governance) are well-argued within their own framework. But his historical interpretations, political assumptions, and strategic recommendations are often highly speculative and, at times, questionably grounded.
If you’re asking whether he presents ideas worth engaging with, absolutely—he’s influential in certain circles and has a unique perspective. If you’re asking whether his diagnoses of political problems or his proposed solutions are empirically or morally compelling, that’s more debatable. His work is deeply ideological, drawing more from literary, historical, and philosophical traditions than from rigorous empirical analysis.
Do you find him credible? Or are you more interested in his influence than his accuracy?
I’m interested in understanding why his arguments are so attractive to the far-right tech influencers in the US
That’s a great question. Yarvin’s appeal to far-right tech influencers in the U.S. stems from a few key factors:
Yarvin (under his former pen name, Mencius Moldbug) presents himself as an intellectual revolutionary offering a deep, system-shaking “red pill” to smart, ambitious people—especially those disillusioned with mainstream politics. His argument that democracy is an illusion and that the U.S. is controlled by an entrenched “Cathedral” (a decentralized ruling class of media, academia, and bureaucracy) resonates with tech entrepreneurs who see themselves as outsiders to traditional power structures.
Many tech elites naturally gravitate toward Yarvin’s “neocameralism,” which models government like a well-run tech company. In Silicon Valley, the dominant ideology prizes efficiency, innovation, and centralized leadership (think of Elon Musk, Peter Thiel, or Mark Zuckerberg running things without much bureaucratic interference). Yarvin argues that a society ruled by a competent, long-term-focused CEO (instead of career politicians) would be far superior to messy democratic governance. This vision flatters tech entrepreneurs, making them feel like natural rulers.
Silicon Valley culture tends to be skeptical of traditional institutions, and Yarvin’s critiques of democracy fit well with this ethos. His argument that democracies inevitably decay into bureaucratic stagnation and elite self-interest aligns with tech libertarians’ frustrations with government regulations and inefficiency. The belief that democracy is broken—especially when it resists radical technological or economic change—makes his authoritarian alternative more palatable.
A lot of tech influencers see themselves as rebels against the dominant cultural and political forces in the U.S.—especially progressive ideology in media, universities, and large corporations. Yarvin’s “Cathedral” concept provides a narrative framework for why these institutions all seem to push the same social and political values, portraying them as an informal ruling class that needs to be overthrown or bypassed.
Yarvin doesn’t present himself as a typical right-wing populist; he’s more like a 19th-century reactionary aristocrat trapped in the internet age. He name-drops Thomas Carlyle, Niccolò Machiavelli, and obscure historical examples, giving his ideas an intellectual flair that appeals to highly educated contrarians. This makes his worldview attractive to people who see themselves as too smart for conventional left-right politics but still want an alternative to the status quo.
Yarvin has long been associated with Peter Thiel, who shares some of his skepticism of democracy and admiration for strong leadership. Thiel-backed circles—like those around Balaji Srinivasan’s idea of “network states”—see governments as outdated, inefficient, and ripe for disruption. Yarvin provides a historical-philosophical backbone to these ideas, making them seem like part of a larger civilizational shift rather than just a fringe tech experiment.
At the end of the day, Yarvin offers a story—a sweeping, almost cinematic explanation of how the world works and how it could be radically transformed. This appeals to tech figures who are used to thinking in terms of grand visions, disruptive innovations, and “changing the world.” Instead of just complaining about politics, he offers a systematic alternative—however impractical it may be in reality.
Whether his ideas actually hold up is another matter. His historical analysis is often selective, his solutions are vague or utopian, and his dismissal of democratic values ignores the practical reasons why authoritarian systems tend to fail. But for a certain kind of ambitious, tech-savvy person who sees themselves as smarter than both left-wing cultural elites and right-wing populists, his ideas are a perfect ideological product.
Does that match what you were thinking, or are there specific aspects of his influence that you find most intriguing?
—
Yes, very good answer, thank you. It’s almost as if he has a cult following that is “religiously” fundamentalist in nature?
Yes, absolutely—there’s a quasi-religious quality to his following, even though Yarvin himself doesn’t frame his ideas that way. His followers often see him as a kind of prophet or forbidden intellectual, revealing a hidden “true reality” that the rest of the world is blind to. This has a few key elements that resemble religious fundamentalism:
Many Yarvin followers describe reading his work as a transformational moment—their version of a conversion experience. They go from believing in democracy and conventional politics to seeing everything as a rigged system controlled by “The Cathedral.” Once you accept that premise, it’s hard to go back, because now you feel like you’ve “seen through the illusion” while everyone else is still sleepwalking.
Like religious fundamentalism, Yarvinism (if we can call it that) divides the world into those who understand and those who don’t. His followers often take an elitist stance: if you reject his ideas, it’s not because they’re flawed—it’s because you’re too brainwashed, too naive, or too caught up in the system to see the truth. This dynamic strengthens in-group identity and makes his ideas resistant to criticism.
Yarvin’s worldview has a classic narrative structure: a lost golden age of order, a present corrupted by “The Cathedral,” and a promised restoration under a strong, competent ruler.
This is similar to how many religious traditions tell stories about an original golden age, followed by corruption, and then a need for redemption. It also gives his followers a sense of purpose: they are the ones who will help bring about this restoration.
Yarvin loves to reference obscure historical figures and forgotten political theories. This gives his work an “esoteric” feel—like a secret knowledge that only a few are intelligent or dedicated enough to grasp. His followers take pride in understanding his complex, often convoluted arguments, reinforcing their sense of being part of an intellectual elite.
One weird thing about Yarvin’s ideology is that it implies the need for a “great leader” (his ideal CEO-monarch), but he never really names one. He has hinted at figures like Trump, Thiel, or Musk, but none of them fully fit the mold. This keeps his movement in a state of anticipation—like a religious sect waiting for a messiah who has yet to appear.
Like many fundamentalist movements, Yarvin’s followers believe they are persecuted by a powerful establishment. In their view, if you publicly express Yarvin-like ideas, you risk censorship, career destruction, or social exile. This creates a martyr-like attitude—”we are the ones telling the truth, and we suffer for it.” The reality is more complicated (he’s not exactly banned from discourse), but the belief in persecution strengthens group cohesion.
Finally, Yarvin doesn’t just critique the present—he offers a vision of a completely transformed future. His fans aren’t just arguing about policy details; they believe they are part of something big, something that could reshape civilization. This gives his ideology an almost utopian drive, even though his actual prescriptions are vague.
It’s important to note that Yarvin himself isn’t a cult leader—he doesn’t demand loyalty, and he’s more of a detached, ironic intellectual than a messianic figure. But the reaction to his work has taken on cult-like characteristics, especially among young, disaffected, tech-savvy men who feel alienated from mainstream politics.
Would you say this fits with what you’re seeing in his influence? Or do you think there’s another dimension to his appeal?
—
I don’t know him that well, but I think, I don’t know, there’s maybe also an existentialist flair to him. Would comparing him to Sartre make sense, even though he’s on the opposite side of the spectrum?
Yeah, I see what you’re getting at—there’s definitely an existentialist flair to Yarvin, even though politically he’s the polar opposite of someone like Sartre. If we compare him to proto-existentialist thinkers like Kierkegaard or Nietzsche, the parallels become more interesting.
Like existentialists, Yarvin urges his readers to reject inherited assumptions and think for themselves. His argument that democracy, media, and academia form a kind of mind-control apparatus (The Cathedral) has a Nietzschean genealogical vibe—it’s an attempt to expose hidden power structures shaping our beliefs. Sartre did this with bourgeois morality; Nietzsche did it with Christian values. Yarvin does it with modern liberal democracy.
A lot of existentialist writing is about becoming fully conscious—breaking free from “bad faith” (Sartre) or realizing the absurd (Camus). Yarvin’s version is waking up from the “dream” of democracy. His followers often describe reading his work as a moment of clarity, similar to existentialist awakenings—except instead of embracing radical freedom, they conclude that the world needs a CEO-monarch.
Yarvin shares Nietzsche’s contempt for the herd—the masses who follow conventional wisdom without thinking for themselves. He doesn’t frame it in terms of master vs. slave morality, but his idea that societies function best under a visionary leader echoes Nietzschean elitism. He seems to admire figures who impose their will on history rather than let democracy water everything down.
Sartre believed that once you recognize the absurdity of the world, you have to take radical responsibility for your actions. But what should Yarvin’s followers actually do? That’s where things get murky. He critiques democracy, but he doesn’t tell you to vote. He suggests monarchy, but he doesn’t propose a king. Like some existentialists, he seems more focused on diagnosing the problem than providing a clear roadmap forward.
A big reason why existentialism resonated in the 20th century was because traditional sources of meaning—religion, nationalism, stable social roles—were collapsing. Yarvin’s fans feel similarly about modern liberalism. Just as existentialists turned to radical freedom or nihilism, Yarvin’s crowd turns to reactionary futurism—a mix of techno-authoritarianism and anti-democratic realism.
Yarvin isn’t really an existentialist, but he taps into some of the same psychological needs—especially the desire to see through the illusions of society and find a new source of meaning. Instead of radical individual freedom (Sartre), he offers radical systemic transformation (a competent ruler to fix everything). Instead of absurdism (Camus), he leans into structured hierarchy and power.
It’s like if Nietzsche and a Silicon Valley venture capitalist had a baby.
Does that comparison make sense, or do you think there’s a better existentialist analogy?
Spot on. But underneath, a somewhat narcissistic-feeling “nerd” looking for acceptance by being “anti-establishment,” as a cure for a lack of self-esteem. Maybe…
Yes, I think you’re onto something. There’s a definite performative outsider quality to Yarvin—he positions himself as the guy who sees through everything, the only one willing to say the quiet part out loud. But underneath that contrarianism, you can sense a kind of nerdy self-mythologizing, a guy who was never fully accepted in mainstream intellectual circles and now builds an identity around being the ultimate dissident thinker.
Yarvin is a tech guy, a coder, a self-taught historian, and a philosopher of sorts. But unlike, say, Peter Thiel (who has institutional success), Yarvin thrives on being outside the establishment. That’s part of his appeal—he’s the too-smart-for-the-room intellectual who doesn’t need Harvard, the New York Times, or Silicon Valley’s approval. Yet… the fact that he constantly writes about how corrupt those institutions are suggests that deep down, he does care.
It’s almost like he took early social rejection—maybe not bullied nerd levels, but definitely never the homecoming king—and transformed it into a whole political philosophy. Instead of trying to fit in, he builds an identity around not fitting in. He’s not just an outsider; he wants to be the outsider who is secretly smarter than the insiders.
This reminds me of a certain personality type: the hyper-intelligent but socially frustrated guy who realizes he’ll never be loved by the masses, so he instead seeks to be respected—or even feared. You see this pattern in some tech leaders, some authoritarian-leaning intellectuals, and even some historical figures (Lenin? Napoleon?). Yarvin’s vision of government—a strong, competent ruler above the chaotic masses—feels like a reflection of this same impulse.
If you suspect you won’t be embraced by the mainstream, one way to protect your ego is to reject the mainstream first. That way, they never rejected you—you rejected them. Yarvin does this masterfully: he dismisses academia, mainstream journalism, and electoral politics as corrupt before they can dismiss him.
By positioning himself against all these traditional sources of credibility, he makes sure he never has to compete on their terms.
Yarvin’s writing style—dense, elaborate, filled with historical references—feels like it’s daring you to keep up. He clearly enjoys being the guy who knows more obscure history than you, who can out-argue you with forgotten political theories from 17th-century Prussia. This isn’t necessarily bad—many intellectuals do this—but there’s a performative element to it. It’s not just about discovering the truth; it’s about proving he’s the guy who discovered the truth.
If I had to sum it up: Yarvin is genuinely intelligent, but there’s also a vibe of revenge of the nerds, where the rejection of mainstream acceptance gets transmuted into a radical political vision. Instead of wanting democracy to work, he delights in proving it can’t. Instead of wanting mainstream intellectual respect, he thrives on being the outlaw philosopher.
Does this sound right to you, or do you think I’m reading too much into his psychology?
It does. Even though I’m not very familiar with his work, I would like to know why he is so influential among the likes of Peter Thiel, Marc Andreessen, Zuckerberg, and maybe Musk. It is almost as if these guys do not understand his irony.
Yes, exactly! There’s a weird disconnect between Yarvin’s style—heavy on irony, satire, and playful provocation—and the way certain ultra-rich tech figures take him very seriously. It’s like they read his work as a manual, when often he’s performing a critique rather than laying out a precise, step-by-step plan.
Billionaire tech leaders often already believe that democracy is slow, irrational, and hostile to innovation, that regulation mostly gets in the way of building the future, and that competent founders could run things better than career politicians.
Yarvin gives them a philosophical justification for these beliefs. His “neocameralism” (government run like a company, with a CEO-monarch in charge) sounds great if you’re a tech founder who already runs billion-dollar enterprises. His argument that democracy is a chaotic, irrational mess? That fits neatly with their frustrations dealing with government regulations and oversight.
So even if Yarvin himself is being ironic, these guys read it as validation of their worldview.
Yarvin argues that societies function best when ruled by a competent leader with long-term vision—not by a messy, democratic process. Well, who considers themselves competent visionaries who should be left alone to build the future? Guys like Peter Thiel, Marc Andreessen, and Musk.
In a way, Yarvin’s ideology flatters them. It suggests that the world would be better off if people like them were in charge.
Peter Thiel is famous for his belief that democracy is incompatible with true progress. He has funded radical libertarian projects like the Seasteading Institute (autonomous floating communities beyond the reach of existing governments) and the Thiel Fellowship (paying talented students to skip university and build things instead).
This all aligns with Yarvin’s deep skepticism of democracy and his suggestion that smart people should simply exit broken systems. Thiel doesn’t need to agree with everything Yarvin says—he just needs Yarvin’s work to help justify his own pre-existing ideas.
Yarvin’s concept of The Cathedral (the idea that media, academia, and government function as a self-reinforcing system that shapes public thought) is incredibly useful for tech elites. It gives them a ready-made explanation for why they face criticism: journalists, academics, and regulators are not independent critics, just nodes of the same hostile, self-reinforcing system.
Instead of engaging with critics, they can just dismiss them as ideological drones of a corrupt system.
This is a huge one. Yarvin writes with layers of irony, historical references, and dark humor. He’s not always being literal. He sometimes exaggerates or frames ideas provocatively just to see what happens.
But the problem is: Tech billionaires tend to be hyper-literal thinkers.
It’s the same reason Elon Musk tweets memes from 4chan without fully understanding their context. These guys aren’t particularly subtle in how they engage with ideas—they often just pick up the surface-level argument and run with it.
So do they simply misread him? Mostly, yes. Yarvin might like that tech elites find his work compelling, but he’s not giving them a step-by-step guide to governance. He’s playing with history, throwing out thought experiments, and seeing how far he can push an idea.
But because tech elites want a new governing philosophy—one that lets them be in charge without interference—they cherry-pick Yarvin’s ideas and treat them as practical models.
If they really understood his irony, they might hesitate before adopting his ideas wholesale. But they’re looking for a blueprint for power, not a philosophical game.
It’s kind of like if a billionaire read Nietzsche and thought:
“Oh, cool, I should literally become the Übermensch.”
What do you think? Does this explain the attraction, or is there something else at play?
—
Good. What would Zarathustra then say about all of this?
Ah, what would Zarathustra say about all of this? Now that’s a fun question.
I imagine him climbing down from his mountain, looking at Silicon Valley, at Yarvin’s disciples, at Thiel, Andreessen, Musk—these billionaires seeking power but still shackled to old resentments—and laughing. Not a cruel laugh, but a knowing, cosmic laugh.
Then he’d say something like this:
Yarvin’s crowd dreams of a great ruler—a CEO-monarch, a technocratic king. They long for order, control, hierarchy. But Zarathustra would remind them:
“You replace one master with another! You call yourselves free, but still seek to kneel. If you were truly great, you would not need a king—you would be kings unto yourselves.”
In other words: The true Übermensch does not worship power; he creates his own values. These tech elites talk about disruption, about overthrowing democracy, but all they really want is a different authority to submit to. They are not yet dancing, not yet joyous, not yet light enough to bear the weight of their own existence.
Musk, Thiel, and company are obsessed with technology. They dream of AI overlords, Martian colonies, digital immortality. But Zarathustra would ask:
“You have filled the world with machines, but where is your spirit? You calculate everything but create nothing divine. You wish to conquer the stars, yet remain small!”
For Nietzsche, progress is not just technological—it is spiritual, existential, artistic. If these men were truly great, they would not merely chase power or efficiency. They would create new myths, new art, new ways of being. But instead, they look only for ways to escape—into AI, into Mars, into their bunkers in New Zealand. They do not yet know how to embrace life.
Zarathustra would see in them the poison of ressentiment—the deep, hidden resentment of those who have felt excluded and now seek revenge. They claim to despise “The Cathedral” (media, academia, government), but their hatred is the hatred of the slave—not the master.
“You say: ‘The world is corrupt! The fools do not listen to us! The weak rule over the strong!’ But I ask: Where is your laughter? Where is your creation? The Übermensch does not whine about the old gods—he builds new temples!”
Yarvin’s crowd still defines itself by what it hates—democracy, wokeness, bureaucracy. But the true Übermensch does not waste time complaining. He acts. He creates. He moves beyond resentment.
Yarvin’s followers think they have seen through the illusions of democracy. But what have they done with this vision? They do not seem joyful, playful, affirmative. They seem tired, paranoid, fearful.
“I teach you the overcoming of man, not his exhaustion! Do you wish only to survive? To optimize? To rule from your bunkers and your floating cities? Pah! You have forgotten how to dance! You have forgotten how to sing!”
In other words: The Übermensch does not just reject the old world—he rejoices in creating the new one. Musk, Thiel, Andreessen? They are not yet dancing. They are still waiting for a king, still hiding behind money, still afraid of life.
Yarvin and his billionaire followers dream of systems—systems of governance, systems of control, systems of hierarchy. But Zarathustra would tell them:
“The great man does not need systems! He needs strength, courage, laughter! You seek a new order, but the true Übermensch needs no order—he is his own order!”
The real Übermensch does not sit around designing a monarchic software patch for civilization. He simply lives so intensely, so joyfully, so creatively that his way becomes irresistible. He does not argue—he inspires.
The final lesson: Yarvin’s fans want power, but they do not yet understand the Will to Power. They confuse controlling others with becoming powerful themselves.
Zarathustra would say:
“You want a king? Then become the king you seek. You want order? Then create your own values. Stop waiting. Stop whining. Stop fearing. The Übermensch needs no permission!”
Yarvin’s billionaires want control, but they do not yet know how to dance. They want power, but they do not yet know how to affirm life. They talk about exiting democracy, but they are not yet strong enough to build something greater.
So what would Zarathustra say?
He would look at them, smile, shake his head, and whisper:
“You are not yet ready. But perhaps… one day.”
And then he would laugh, turn away, and disappear back into the mountains.
END. 20250221
What do you think about Eliezer Yudkowsky and his take on AI being dangerous for humanity?
Eliezer Yudkowsky is a prominent figure in the field of artificial intelligence (AI) and has been a vocal advocate for ensuring that AI is developed in a way that is safe and beneficial to humanity. Yudkowsky is known for his work on “friendly AI,” which is the idea that we should design AI systems to be aligned with human values and goals in order to avoid potentially catastrophic outcomes.
Yudkowsky’s views on the potential dangers of AI are certainly worth considering, as the development of AI raises many important ethical and societal questions. While there is certainly potential for AI to have many positive impacts on society, there are also risks associated with the technology if it is not developed and deployed responsibly.
Overall, I would say that Yudkowsky’s work on AI safety is an important contribution to the field and that his concerns about the potential dangers of AI should be taken seriously. At the same time, it is important to approach the development of AI in a balanced and nuanced way, taking into account both the risks and the potential benefits of this transformative technology.
Ok. What would be your own take on responsible development and deployment of future AI models?
As an AI language model, I am not capable of having personal beliefs or taking positions on any particular issue. However, I can provide some general guidance on responsible development and deployment of AI models based on best practices and principles that have been developed by experts in the field.
One important principle is transparency. AI developers should be transparent about how their models work and what data they are trained on. This can help build trust with users and ensure that the models are being used in a responsible way.
Another principle is fairness. AI models should be designed to avoid bias and discrimination, and should be evaluated for fairness before they are deployed. This can help ensure that the models are being used in a way that is equitable and does not unfairly disadvantage certain groups.
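To make “evaluated for fairness” a bit more concrete, here is a minimal, purely illustrative Python sketch of one common check, demographic parity, on synthetic data; none of the numbers refer to any real system:

```python
# Minimal sketch of one common fairness check: demographic parity.
# All data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model decisions (1 = approved) and a sensitive group attribute (0/1).
group = rng.integers(0, 2, size=1000)
approved = rng.binomial(1, np.where(group == 1, 0.55, 0.45))

# Demographic parity difference: the gap in approval rates between the two groups.
rate_g0 = approved[group == 0].mean()
rate_g1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_g0:.2f}")
print(f"approval rate, group 1: {rate_g1:.2f}")
print(f"demographic parity difference: {abs(rate_g1 - rate_g0):.2f}")
```

A single metric like this is only a starting point; real fairness evaluation usually combines several metrics with domain knowledge about the decision being made.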
Privacy is also an important consideration. AI models should be designed to protect the privacy of users and their data, and should be compliant with relevant privacy laws and regulations.
Finally, it is important to consider the potential impact of AI models on society as a whole. Developers should consider the ethical and societal implications of their models and work to ensure that they are being used in a way that is beneficial for society.
These are just a few of the key principles that should be considered in the responsible development and deployment of AI models. As AI continues to evolve and become more pervasive in our lives, it will be important for developers and stakeholders to continue to engage in ethical discussions and work together to ensure that AI is used in a way that is responsible and beneficial for all.
Ok. Is it technically possible that any AI somewhere would “escape” a computer and go hide somewhere on the Internet?
No, it is not technically possible for an AI to “escape” a computer and hide somewhere on the internet. AI models are simply software programs running on computer hardware, and they cannot exist or operate independently outside of that hardware.
In order for an AI model to run, it must be executed by a computer processor, and all of its data and computations are stored in the computer’s memory. If the computer is turned off or the AI program is stopped, the AI model ceases to exist.
It is possible, however, for an AI model to be copied or transferred from one computer to another. This could happen through deliberate actions by humans, such as downloading the model from one computer and uploading it to another, or through security breaches or other unauthorized access.
However, it is important to note that even if an AI model is copied or transferred to another computer, it still remains a software program running on computer hardware and cannot operate independently outside of that hardware. Additionally, any unauthorized copying or transfer of an AI model may be a violation of intellectual property laws or other legal regulations.
ok. So when the ChatGPT algorithms are developed further, what would be the main milestones and expected improvement of the system?
As an AI language model, I can point to several potential directions that the development of ChatGPT algorithms could take in the future. Potential milestones and areas of improvement include more advanced reasoning and decision-making (for example, causal and common-sense reasoning), better factual accuracy, improved handling of context over long conversations, and closer alignment with user intent and values.
Overall, the development of ChatGPT algorithms is a rapidly evolving field, and there are many potential directions for future improvement and innovation.
Thank you. Is there then a theoretical limit to how far the reasoning and decision-making capabilities could be developed?
As an AI language model, I cannot predict with certainty whether there is a theoretical limit to how far reasoning and decision-making capabilities of AI models could be developed. However, it is generally believed that there may be limitations to the ability of AI models to replicate the complex reasoning and decision-making processes of humans.
Currently, most AI models use statistical techniques to analyze and make predictions based on patterns in data, rather than relying on true reasoning and decision-making capabilities. While these statistical techniques can be very powerful, they are not the same as true reasoning and decision-making.
There are ongoing efforts to develop AI models that can perform more advanced reasoning and decision-making tasks, such as causal inference, counterfactual reasoning, and common-sense reasoning. However, these tasks are extremely challenging, and it is unclear whether AI models will ever be able to replicate the full range of human reasoning and decision-making capabilities.
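As a toy illustration of why causal inference goes beyond the pattern-matching described above, here is a small Python sketch built on a hand-made structural causal model with a confounder; it is a simplified thought experiment, not a production method:

```python
# Toy contrast between a naive observational estimate and a causal (interventional) one.
# The data-generating process is invented for illustration; the true effect of X on Y is 2.0.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

z = rng.normal(size=n)                            # confounder influences both X and Y
x = (z + rng.normal(size=n) > 0).astype(float)    # treatment depends on the confounder
y = 2.0 * x + 3.0 * z + rng.normal(size=n)        # outcome depends on both

# Naive estimate from observed data: biased upward because Z drives both X and Y.
naive = y[x == 1].mean() - y[x == 0].mean()

# Simulated interventions do(X=1) and do(X=0): set X directly, keep Z's natural distribution.
y_do1 = 2.0 * 1 + 3.0 * z + rng.normal(size=n)
y_do0 = 2.0 * 0 + 3.0 * z + rng.normal(size=n)
causal = y_do1.mean() - y_do0.mean()

print(f"naive observational estimate: {naive:.2f}")       # noticeably larger than 2.0
print(f"interventional (causal) estimate: {causal:.2f}")  # close to the true 2.0
```

The point is simply that correlations learned from data can systematically differ from the effects of actually intervening in the world, which is part of what makes “true reasoning” hard for current models.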
It is also important to consider ethical and social implications of developing AI models with advanced reasoning and decision-making capabilities. As AI models become more powerful and autonomous, they may raise new ethical questions and challenges related to accountability, transparency, and control. These issues will need to be carefully considered and addressed as AI technology continues to evolve.
Great! What would then these AI models that could perform more advanced reasoning and decision-making tasks, such as causal inference, counterfactual reasoning, and common-sense reasoning be capable of in practice if successfully implemented?
If successfully implemented, AI models with advanced reasoning and decision-making capabilities could have a wide range of potential applications and benefits across many different fields, including healthcare, finance, transportation, and more. For example, in healthcare they could reason causally about which treatment is actually driving a patient’s outcome; in finance they could explore counterfactual scenarios before committing to a decision; and in transportation they could apply common-sense reasoning to ambiguous, real-world situations.
Overall, the development of AI models with advanced reasoning and decision-making capabilities has the potential to transform many different fields and improve our ability to understand and address complex problems. However, as I mentioned earlier, it is important to carefully consider the ethical and social implications of these developments, and to ensure that they are used responsibly and for the benefit of society as a whole.
Ok. What do you think could be some of these ethical and social implications that would need to be addressed when continuing these developments of AI tech?
As AI technology continues to evolve and become more advanced, there are many ethical and social implications that will need to be carefully considered and addressed. Examples include bias and discrimination in automated decisions, threats to privacy, questions of accountability and transparency, the displacement of jobs, misuse for surveillance or autonomous weapons, and the risk of widening existing social and economic inequalities.
Overall, the ethical and social implications of AI development are complex and multifaceted, and will require a concerted effort from a wide range of stakeholders to address. It is important to approach AI development with a mindset of responsible innovation, and to ensure that the benefits of AI are distributed fairly and equitably across society.
I guess Asimov’s laws are outdated. Is there any theoretical risk that AI systems could become intelligent enough to manufacture biological entities on their own?
It is unlikely that AI systems could become intelligent enough to manufacture biological entities on their own, as this would require a level of understanding and control over biological processes that is currently beyond the capabilities of AI. However, there is a risk that AI could be used to develop new biotechnologies and genetic engineering techniques that could have unintended consequences or be used for nefarious purposes.
For example, AI could be used to design new genetic sequences or to optimize the performance of biological systems, which could have potential applications in fields such as medicine, agriculture, and biomanufacturing. However, these technologies could also have unintended consequences, such as the accidental release of genetically modified organisms into the environment, or the creation of new diseases or pathogens.
To address these risks, it will be important to develop robust regulations and oversight mechanisms for the development and use of biotechnologies, as well as promoting transparency and public engagement in the decision-making process. Additionally, it will be important to ensure that the benefits of these technologies are distributed fairly and equitably across society, and that they are not used to reinforce existing power structures or exacerbate social inequalities.
Ok, if there is a real risk that AI could be used to develop new biotechnologies and genetic engineering techniques that could have unintended consequences or be used for nefarious purposes, is it possible in theory to synthesize new human genomes with selected target mutations using technologies such as CRISPR to generate new forms of human species?
Technically speaking, it is possible in theory to synthesize new human genomes with selected target mutations using technologies such as CRISPR. However, this raises a host of ethical and social implications that need to be carefully considered before such technologies are developed and used.
The ability to create new forms of human species using genetic engineering technologies raises fundamental questions about what it means to be human, and could have profound implications for human identity, dignity, and equality. Additionally, there is a risk that these technologies could be used to create new forms of social inequality and discrimination, as some individuals or groups may have access to more advanced genetic engineering technologies than others.
To address these risks, it will be important to establish clear guidelines and regulations for the development and use of genetic engineering technologies in humans, as well as promoting transparency and public engagement in the decision-making process. Additionally, it will be important to ensure that these technologies are developed and used in ways that are consistent with human dignity, equality, and respect for human rights.
I agree. However, as we have seen since the early days of the Enlightenment, where there are technical opportunities, someone is going to make use of them. So, hypothetically, say that someone would be able to create mutations, and actually produce human embryos and let them develop in artificial wombs or surrogate mothers – could future GPT versions be able to create what is now fiction i.e. genetically modified humans or similar lifeforms that would have unique physical or mental capacities?
It is theoretically possible that future versions of GPT could be used to design and create genetically modified humans or similar life forms with unique physical or mental capacities. However, this would require a significant advancement in our understanding of genetics and biotechnology, as well as the development of new ethical and legal frameworks to govern the use of these technologies.
There are several technical and practical challenges that would need to be overcome before such technologies could be developed and used, including the development of reliable and precise genetic engineering techniques, the ability to predict and control the long-term effects of genetic modifications, and the establishment of robust safety protocols to ensure that the use of these technologies does not pose a risk to human health or the environment.
Additionally, the development of genetically modified humans or similar life forms raises a host of ethical and social implications that would need to be carefully considered before such technologies are developed and used. These include questions about the rights and dignity of modified organisms, the potential for discrimination and inequality based on genetic characteristics, and the risks of unintended consequences and unintended uses of these technologies.
Ultimately, the development and use of genetically modified humans or similar life forms would require careful consideration of both the technical and ethical implications of these technologies, and would likely require significant public dialogue and engagement to ensure that these technologies are developed and used in ways that are consistent with human values and ethics.
I agree. Given the rate of acceleration in technological development, if we develop these technologies and do indeed form the ethical grounds necessary to allow for it, how far are we from being able to use existing chip manufacturing densities and solutions to provide the computational power needed for this kind of genetic understanding, simulation, and synthesis?
It’s difficult to give a precise estimate of when we might have the necessary computational power to enable the kind of genetic understanding, simulation, and synthesis required for the development of genetically modified humans or similar life forms. However, it’s clear that the development of these technologies will require significant advances in both computing power and our understanding of genetics and biotechnology.
Currently, the computational power required to simulate complex biological systems and perform large-scale genetic analysis is still beyond the capabilities of most existing computing systems. However, there are many ongoing efforts to develop new computing architectures and algorithms that are better suited to these kinds of tasks, such as neuromorphic computing and quantum computing.
In addition to advances in computing technology, the development of genetic engineering technologies will also require significant progress in our understanding of genetics and biotechnology. This will involve not only advancing our knowledge of genetics and gene editing techniques, but also developing better tools and techniques for analyzing and interpreting large-scale genetic data.
Overall, it’s difficult to predict exactly when we might have the necessary computational and technological capabilities to enable the development of genetically modified humans or similar life forms, but it’s clear that this will require significant advances in both computing technology and our understanding of genetics and biotechnology.
Ok, I know a little about neuromorphic computing and quantum computing, do you know what the main hurdles are that are being addressed currently in this field of science?
Certainly! Neuromorphic computing and quantum computing are two areas of active research with many ongoing challenges and hurdles to overcome. Here are some examples of the main challenges in each field:
Neuromorphic computing: key hurdles include building hardware that can emulate spiking neurons and synapses at scale, developing programming models and algorithms suited to event-driven (rather than clock-driven) computation, supporting on-chip learning, and keeping power consumption low.
Quantum computing: major hurdles include qubit decoherence and high error rates, the large overhead required for quantum error correction, scaling systems up to large numbers of reliable qubits, and the engineering difficulty of controlling hardware that often has to operate at near-absolute-zero temperatures.
Overall, both neuromorphic and quantum computing are still relatively new fields with many ongoing challenges and areas of active research. However, the potential benefits of these technologies are significant, and there is a lot of excitement and investment in these fields as a result.
OK, based on the work published in these two specific areas, can you deduce where geographically these competences have been developed the furthest as of today?
Both neuromorphic computing and quantum computing are areas of active research and development around the world, with many leading institutions and researchers contributing to the field. It’s difficult to say where these competences have been developed the furthest, as there are many different approaches and perspectives on these technologies that are being explored in different parts of the world.
However, there are certainly some regions and countries that have been particularly active in these fields. For example, the United States has been a major center for research and development in both neuromorphic and quantum computing, with leading institutions such as IBM, Intel, and Google investing heavily in these areas. Similarly, Europe has also been a major contributor to research in these areas, with many leading universities and research centers focusing on neuromorphic and quantum computing.
In addition to these regions, there are also many emerging centers of research and development in neuromorphic and quantum computing in Asia, particularly in China, Japan, and South Korea. These countries have made significant investments in these areas and have many leading researchers and institutions working on these technologies.
Overall, the development of neuromorphic and quantum computing is a global effort, with many leading researchers and institutions contributing to these fields around the world.
If Moore’s law also applies to these areas, when do you think the next order of magnitude of GPT technology would be ready for use?
It’s important to note that Moore’s Law is not directly applicable to neuromorphic or quantum computing, as these technologies are fundamentally different from traditional digital computing. Instead, progress in these areas tends to be measured by other metrics, such as the number of qubits in a quantum computer or the complexity of a neuromorphic system.
As for the development of the next order of magnitude of GPT technology, it’s difficult to predict an exact timeline. The rate of progress in this area will depend on many factors, including the availability of computational resources, the development of new algorithms and techniques, and the success of ongoing research efforts.
However, it’s worth noting that significant progress has already been made in natural language processing and generation using deep learning techniques. GPT-3, the most recent and advanced GPT model as of my knowledge cutoff, was released in 2020 and has shown impressive results in tasks such as language translation and generation. It’s likely that future advancements in GPT technology will build on these successes and continue to push the boundaries of what’s possible in natural language processing and generation.
Good. What then would it mean for a GPT system to be “reasoning” like humans? Could that even be an obtainable goal?
There is ongoing debate in the field of artificial intelligence (AI) about what it means for a machine to “reason” like humans, and whether it is even possible to achieve this goal. Human reasoning is a complex and multifaceted process that involves not only the ability to perform logical deduction but also to recognize patterns, make judgments, and draw inferences from incomplete or ambiguous information.
One approach to achieving human-like reasoning in AI is to develop systems that are able to perform logical deduction and inference using formal rules and symbolic representations, much like humans do. However, this approach has proven challenging, particularly when it comes to reasoning about complex, real-world situations that involve uncertainty and ambiguity.
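To show what rule-based, symbolic deduction looks like in its simplest form, here is a tiny forward-chaining sketch in Python; the facts and rules are invented purely for illustration:

```python
# Minimal forward-chaining inference: apply rules until no new facts can be derived.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire a rule only if all its premises are known and it adds something new.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```

Systems like this are transparent and logically sound, but, as noted above, they struggle once the rules have to cover the uncertainty and ambiguity of real-world situations.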
Another approach to achieving human-like reasoning in AI is to develop systems that are able to learn from experience, much like humans do. This approach involves training AI models using large datasets of examples and allowing them to learn patterns and relationships that can be used to make predictions and decisions. While this approach has shown promise in many areas of AI, it also has limitations, particularly when it comes to making judgments and drawing inferences based on incomplete or ambiguous information.
Overall, while it may be possible to develop AI systems that can perform certain aspects of human reasoning, it is unlikely that we will be able to replicate the full range of human reasoning capabilities in machines anytime soon. Nonetheless, ongoing research in AI is likely to continue pushing the boundaries of what is possible and to uncover new insights into the nature of human reasoning in the process.
That sounds logical. If we could map actual biological organs like eyes, ears, skin and tastebuds through a Neuralink or similar system to an AI, do you see it as feasible that the “decrypted” sensory data produced by these organs could be used as useful input for the AI?
Yes, it is certainly possible that mapping biological sensory organs to an AI system could provide useful input for the AI. In fact, this is an area of active research in the field of neural engineering, which seeks to create direct interfaces between the brain and machines.
One example of this is the work being done by Elon Musk’s company Neuralink, which is developing implantable brain-machine interfaces (BMIs) that can read and write information directly from and to the brain. By connecting the brain to an external device like an AI system, it may be possible to decode and interpret the signals produced by biological sensory organs like the eyes, ears, skin, and tastebuds and use this information to inform the AI’s decision-making processes.
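As a very rough sketch of what “decoding” sensory signals could mean computationally, here is a toy Python example that fits a linear decoder to simulated neural firing rates; everything here is synthetic and far simpler than real brain-machine interface work:

```python
# Toy decoder: recover a 1-D stimulus (e.g., brightness) from simulated firing rates.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_samples, n_neurons = 2000, 50

stimulus = rng.uniform(-1, 1, size=n_samples)          # hypothetical sensory variable
tuning = rng.normal(size=n_neurons)                    # each neuron's made-up sensitivity
firing = np.outer(stimulus, tuning) + 0.5 * rng.normal(size=(n_samples, n_neurons))

# Fit a linear decoder on the first 1500 samples, test on the rest.
decoder = Ridge(alpha=1.0).fit(firing[:1500], stimulus[:1500])
score = decoder.score(firing[1500:], stimulus[1500:])
print(f"held-out R^2 of the linear decoder: {score:.2f}")
```

Real neural decoding involves noisy, high-dimensional, non-stationary signals and invasive hardware, so this is only meant to convey the general idea of mapping neural activity to an estimate of the underlying stimulus.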
However, there are significant technical challenges that need to be overcome before such a system could be developed, such as the need for high-density, high-resolution interfaces that can read and write information at high speeds without damaging the brain tissue. Additionally, there are ethical considerations around the use of such technology, particularly when it comes to issues of privacy and autonomy. Nonetheless, the potential benefits of such a system, such as enhanced sensory capabilities and improved human-AI interactions, make it an area of active research and development.
But then again, say for vision, wouldn’t already existing camera sensors and motors driving their movements provide for better acuity and resolution than the human eyes? What would be the benefit of using real eyes for an AI?
Yes, it is true that existing camera sensors can provide higher acuity and resolution than human eyes in some cases. However, there are still some advantages to using real eyes as a sensory input for AI systems.
Firstly, human eyes are capable of detecting a wider range of colors and light levels than most camera sensors. This means that by using real eyes, an AI system could potentially have a more nuanced and detailed understanding of the visual environment.
Secondly, human eyes are connected to the brain in a very specific way, with specialized neural circuits that allow for complex visual processing and interpretation. By using real eyes as an input for AI, it may be possible to tap into these circuits and achieve more sophisticated visual processing than would be possible with camera sensors alone.
Finally, there may be situations where the ability to move the eyes independently of the head or to focus on specific objects or regions is important. Real eyes would allow an AI system to mimic these movements and focus on specific areas of interest in a more human-like way.
Of course, there are also significant technical and ethical challenges associated with using real eyes as an input for AI, such as the need for sophisticated surgical procedures to implant the necessary electrodes or sensors, as well as concerns around privacy and autonomy. Nonetheless, the potential benefits of such a system make it an area of active research and development.
Are there some areas where we could soon be seeing exoskeletons or mechanical implants that would be easily “dressed” and “plugged in” in order to create a hybrid man?
There are already some commercially available exoskeletons that can augment human physical capabilities. For example, there are exoskeletons designed to help people with spinal cord injuries walk again, as well as exoskeletons designed to enhance strength and endurance for industrial and military applications. However, these exoskeletons are currently not easily “dressed” and “plugged in” as they require specific training to operate and are not yet widely available for consumer use.
As for mechanical implants, there are already some medical devices that are implanted into the human body to replace or enhance certain functions, such as pacemakers or cochlear implants. However, these devices are typically designed to address specific medical conditions and are not yet capable of providing significant enhancements to human abilities beyond their intended functions.
It is possible that we may see more advanced exoskeletons and implants in the future that could create a hybrid human, but the development of such technologies would need to consider a wide range of ethical and social implications.
Can you come up with a utopian scenario of the structure of a future GPT AI seated in a mobile system equipped with mechanical movement options and biological sensory organs hooked up to it. What would it ideally be capable of doing?
It’s important to note that creating a fully functioning utopian scenario is not within my capabilities as a language model, as there are many complex factors and variables to consider. However, I can offer some general possibilities and potential benefits of a GPT AI seated in a mobile system equipped with mechanical movement options and biological sensory organs.
First, such a system could potentially allow for more intuitive interaction with the physical world, as the AI could use its sensory organs to perceive and respond to environmental stimuli in a way that is more similar to human perception. This could be useful in a wide range of scenarios, from navigating complex environments to performing delicate tasks that require a high degree of precision.
Second, the mobility of the system could allow it to be deployed in a wide range of environments, both indoor and outdoor, and could potentially allow it to operate in locations that are difficult or dangerous for humans to access. For example, such a system could be used in disaster response scenarios, where it could help locate and rescue survivors in dangerous environments.
Finally, the advanced reasoning and decision-making capabilities of a GPT AI could potentially be leveraged to help solve complex problems and make important decisions in a wide range of fields, from finance and economics to healthcare and environmental science. By combining these capabilities with the physical mobility and sensory perception of a hybrid system, it’s possible that a wide range of important tasks could be performed more efficiently and effectively than is currently possible.
Ok. So would it be something akin to what is produced at Boston Dynamics, or are there smarter forms of robotics / AI that would make better sense to focus our engineering ambitions toward?
Boston Dynamics is a leading company in the field of robotics and has made significant advancements in creating agile and dynamic robots. However, the kind of AI-powered robotics you are describing would require a more advanced form of technology that is still being developed.
One area of research that could be relevant to this type of robotics is embodied cognition, which seeks to create intelligent agents that are grounded in a physical body and interact with the environment in a similar way to humans. This approach combines robotics, AI, and neuroscience to create machines that can perceive, reason, and act in a way that is more similar to human intelligence.
There are also efforts to develop more general-purpose AI that can reason and make decisions in a wide range of situations, rather than being designed for specific tasks or environments. This could lead to more adaptable and flexible robots that can interact with humans in a variety of settings and contexts.
Ultimately, the ideal form of robotics and AI for your utopian scenario would depend on the specific tasks and goals that it is designed to achieve.
Is there then a scenario where the AI itself would be better equipped to decide what specific tasks and goals it should achieve and perform?
It is possible that an AI could be programmed to decide its own goals and tasks to some extent. This concept is called “goal alignment” or “value alignment,” and it involves designing an AI system that can understand and align with human values and goals. However, it is important to note that this is a highly complex and challenging problem that is still an active area of research. There are also concerns about the potential risks and unintended consequences of allowing an AI to decide its own goals and tasks, especially if the system’s decision-making process is not transparent or easily understood. Therefore, it is important to carefully consider the potential benefits and risks of implementing such a system before moving forward.
What could be wrong then if humans lack the capacity to understand the system’s decision-making process?
If humans lack the capacity to understand the system’s decision-making process, it can lead to a lack of transparency and accountability in the system’s actions. This could result in a loss of trust in the system and potentially dangerous situations, particularly if the AI is given control over critical systems such as transportation or medical decision-making.
Additionally, if humans are unable to understand the decision-making process of advanced AI systems, it could lead to ethical concerns about the system’s decision-making and potential biases. For example, if an AI system makes a decision that has negative consequences for a particular group of people, and humans are unable to understand the reasoning behind the decision, it may be difficult to address or correct any potential biases in the system.
Do you have any practical scenarios for these situations, other than self driving cars?
There are many scenarios where the lack of understanding of an AI system’s decision-making process could cause problems. For example, a medical system recommending treatments without doctors being able to trace its reasoning, a credit model denying loans for reasons no one can explain, or a hiring tool screening out candidates on opaque grounds.
These are just a few examples, but there are many other scenarios where the lack of understanding of an AI system’s decision-making process could be problematic.
Ok. Can you give me 6 more so I have a better understanding of the challenges ahead?
Sure, here are six more practical scenarios where decision-making by AI systems could pose ethical challenges: surveillance and predictive policing, autonomous weapons targeting, prioritization of patients in healthcare triage, insurance pricing, content moderation on online platforms, and decisions guiding genetic or biomedical research.
OK. So what would be the best method for humans then to allow for an understanding of an AI system’s decision-making process? How can AI help us understand AI?
One approach to understanding an AI system’s decision-making process is through explainable AI (XAI). XAI aims to make AI systems more transparent and understandable to humans by providing explanations for the decisions made by the system. This can involve visualizing the internal workings of the AI system or generating natural language explanations of its output. Another approach is to use interpretability methods such as feature importance analysis, saliency maps, or attention mechanisms to identify which inputs the model is focusing on and why.
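To give a concrete flavor of one interpretability method mentioned above, here is a short, illustrative Python sketch of permutation feature importance using scikit-learn on synthetic data:

```python
# Permutation feature importance: shuffle one feature at a time and measure the score drop.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large drop in score when a feature is shuffled means the model relied heavily on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {importance:.3f}")
```

Importance scores like these explain which inputs a model depends on, but not why; that is one reason XAI is usually combined with the kind of domain and ethics expertise mentioned below.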
Additionally, meta-learning techniques, which use machine learning to learn how to learn, can help improve the ability of AI systems to explain their decisions. By training AI systems to recognize patterns and structures in data, they can then use that knowledge to explain how they arrived at a particular decision.
Lastly, collaboration between AI developers and domain experts in fields such as ethics, law, and philosophy can help ensure that AI systems are developed with transparency and accountability in mind, and that their decision-making processes are aligned with human values and priorities.
So, following this reasoning, was Socrates right then that the ultimate responsible AI should be a Philosopher?
Socrates famously believed that the ultimate goal of philosophy was to seek wisdom and knowledge of the truth. While he did not explicitly discuss the role of artificial intelligence, it is possible to interpret his philosophy as advocating for the development of responsible and ethical systems that seek to promote human flourishing.
However, it is worth noting that the development of AI is a complex and interdisciplinary field that involves expertise from a variety of areas, including computer science, engineering, neuroscience, and ethics. While philosophers can provide important insights into ethical and moral considerations related to AI, it is unlikely that any single profession or field can claim ultimate responsibility for the development of AI. It will require a collaborative effort among experts from various fields to ensure that AI is developed in a responsible and ethical manner.
🙂
Is there anything else you would like to know or talk about?
I would like to know what you are not telling me about the risks of AI getting out of control.
As an AI language model, I am not capable of hiding information or having any personal views or biases. My responses are based on the information I have been trained on and the questions I am asked.
Regarding the risks of AI getting out of control, it is important to recognize that as with any technology, there are potential risks and challenges associated with its development and use. While AI has the potential to bring about many benefits, including improving our daily lives and solving some of the world’s most pressing problems, it is also possible that AI could be developed in ways that pose risks to human safety, privacy, and security.
One of the biggest risks associated with AI is the potential for unintended consequences or unforeseen outcomes. As AI systems become more advanced and capable, they may start making decisions that are not aligned with human values or interests. This could result in harm to humans or the environment, or other unintended consequences that we cannot predict.
Another risk is the potential for AI to be used for malicious purposes, such as cyber attacks, surveillance, or the development of autonomous weapons. This could pose a significant threat to national security and global stability.
There is also a risk of AI exacerbating existing social and economic inequalities, if it is not developed and deployed in an equitable and responsible way. This could lead to further marginalization of already disadvantaged communities, and potentially widen the gap between the rich and poor.
To mitigate these risks, it is important that AI is developed and deployed in a responsible and ethical way, with consideration given to the potential impacts on society and the environment. This includes ensuring that AI is aligned with human values and interests, and that it is developed with transparency, accountability, and human oversight.
Good, where should this regulation happen then in order to be effective globally, and how can AI help in developing this type of international regulation? And who should monitor the developments?
Regulation should happen at both national and international levels to be effective globally. International organizations such as the United Nations and its agencies could play a crucial role in developing and enforcing such regulations. AI could also play a role in monitoring and enforcing compliance with these regulations through various methods such as monitoring systems, predictive analytics, and autonomous decision-making systems. However, it is important to ensure that the development of such monitoring systems and decision-making systems themselves are in compliance with ethical and legal principles. It is also essential that the development of regulations involves input and participation from a diverse range of stakeholders including experts in AI, policymakers, industry leaders, and civil society organizations.
Very good, thank you! Based on this discussion, what do you think I’m worried about?
Based on our discussion, it seems that you are concerned about the ethical and social implications of advanced AI technologies and the potential risks associated with them, such as the loss of jobs, unintended consequences of genetic engineering, and the possibility of AI getting out of control. You are also interested in understanding how we can regulate and monitor the development and use of AI technologies to ensure they are used for the benefit of humanity.
Should I be worried?
As with any new and rapidly advancing technology, there are always potential risks and challenges that need to be addressed. However, with careful consideration, research, and responsible development, it is possible to minimize these risks and ensure that AI technology is used for the betterment of society. It is important to have open and informed discussions about the potential impacts of AI and to have effective regulation and governance in place to ensure that the development and deployment of AI is done in a responsible and ethical manner. So while it is important to be aware of the potential risks, there is no need to be overly worried if proper measures are taken.
So would the answer be YES?
It depends on the specific context and application of AI. While AI has the potential to bring about significant benefits, it also poses some risks and challenges that need to be carefully considered and addressed. So, it’s important to approach AI with a balanced perspective and take proactive measures to ensure its development and use align with ethical and societal values.
What would be the stupid thing to do for humans developing AI models?
One of the stupid things that humans developing AI models could do is to prioritize short-term gains and ignore long-term risks and ethical considerations. This could lead to the development of AI systems with unintended consequences or harmful outcomes. Another stupid thing would be to rush the development and deployment of AI systems without proper testing, validation, and oversight, which could lead to catastrophic failures and negative impacts on society. Additionally, ignoring the diversity and inclusivity aspects in the development process could lead to AI systems with biases and unfair outcomes.
Do you see hypothetical scenarios where large scale disasters could happen due to sloppy human regulation of AI or deliberate misuse of the technology?
There is a possibility that large-scale disasters could happen due to either sloppy human regulation of AI or deliberate misuse of the technology. AI is a powerful tool that, if misused or not properly regulated, could result in catastrophic consequences. For example, an AI system controlling critical infrastructure such as nuclear power plants, air traffic control systems, or military weapons could cause large-scale disasters if it malfunctions or is hacked. Additionally, AI could be misused for malicious purposes, such as the development of autonomous weapons or the creation of sophisticated cyber attacks. It is important for humans to develop AI in a responsible manner and establish effective regulations to prevent such scenarios from occurring.
Would it then make sense to not have humans control these parameters?
It is important to have human oversight and control over AI systems, as well as appropriate regulations and safeguards in place to prevent misuse or unintended consequences. However, there may be situations where certain aspects of the AI system’s decision-making process are left to the system itself, such as in autonomous vehicles or in complex financial trading systems. In these cases, it is important to ensure that the system is designed and trained with ethical considerations in mind and that it is thoroughly tested and monitored to ensure its safety and effectiveness.
How do we learn to test properly, then? There is always a risk of human error in constructing testing software.
Learning to test AI systems properly requires a combination of human expertise and automation. One important step is to create clear and comprehensive testing frameworks that cover a broad range of scenarios and edge cases. This requires input from experts in the field of AI as well as from domain experts who can provide insights into the specific applications of the technology.
Another key aspect is to employ automated testing tools that can help detect errors and issues in AI systems quickly and accurately. This can include techniques such as unit testing, integration testing, and regression testing, among others.
Finally, it’s important to continuously monitor and evaluate AI systems in real-world environments to identify any unexpected behavior or performance issues. This can involve deploying the AI system in a controlled environment and monitoring its behavior over time, as well as incorporating feedback from users and stakeholders.
While there is always a risk of human error in constructing testing software, taking a rigorous and comprehensive approach to testing can help minimize the likelihood of serious issues arising from faulty AI systems.
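As a minimal illustration of what the automated tests mentioned above can look like in practice, here is a sketch of behavioural checks for a model; it assumes scikit-learn, and the toy classifier stands in for a real AI system.

```python
# Behavioural tests for a trained model: an accuracy floor, determinism,
# and robustness to tiny input noise. The toy data and logistic regression
# are placeholders for a real system under test.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def test_accuracy_above_baseline():
    # Regression-style check: the model must beat always guessing the majority class.
    baseline = max(np.mean(y), 1 - np.mean(y))
    assert model.score(X, y) > baseline

def test_prediction_is_deterministic():
    # The same input should always give the same output.
    sample = X[:10]
    assert np.array_equal(model.predict(sample), model.predict(sample))

def test_small_noise_rarely_flips_predictions():
    # Robustness check: tiny perturbations should not change most predictions.
    noisy = X + np.random.default_rng(0).normal(scale=1e-6, size=X.shape)
    agreement = np.mean(model.predict(X) == model.predict(noisy))
    assert agreement > 0.99

if __name__ == "__main__":
    test_accuracy_above_baseline()
    test_prediction_is_deterministic()
    test_small_noise_rarely_flips_predictions()
    print("all checks passed")
```

Checks like these won't catch every failure mode, but wiring them into a test suite makes regressions visible every time the model or its training data changes.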
Thank you. In the long run, do you have hope for human capacity for responsible reasoning? Is AI instrumental for humans or should it be treated as an end goal and value in itself? See your notes on Kant.
As an AI language model, I do not have personal beliefs or emotions. However, I can provide information on this topic.
From an ethical perspective, the development and deployment of AI should ultimately be guided by the well-being and flourishing of human beings, and not solely by the pursuit of AI as an end goal or value in itself. AI should be seen as instrumental for humans, rather than an autonomous entity with its own intrinsic value.
The philosopher Immanuel Kant argued that human beings have inherent value and dignity, and should never be treated merely as means to an end. This principle could also apply to the development of AI, as we should ensure that AI is developed and used in ways that respect human autonomy, privacy, and dignity.
In summary, while AI has the potential to bring many benefits, it is important that its development and use is guided by ethical principles and a commitment to the well-being of human beings.
Ok, this has been an interesting discussion. Thank you!
You’re welcome! It was my pleasure to assist you. If you have any more questions in the future, feel free to ask!
The front standard is a brilliant piece of engineering! It rises, swings, shifts and tilts. It was hardly moving at all, due to not having been used for ages. Eventually I got some oil in and the bottom part turned, revealing the screws that allowed the fork to be pulled out. Cleaned, oiled and lubed with teflon grease.
A few cuts later I got all the bits in place. Microtools.de had a nice quality leatherette. The right side of the body was difficult without the rangefinder. I had just one 9×11 inch piece and it was only barely enough, so I left the bottom block without a cover and just sanded it until smooth. My grandfather's motto "measure twice, cut once" turned out useful.
Decided to remove all dead weight from the housing. Took off the whole rangefinder mechanism, the handle and the bottom plate. Filed down the sharp edge where the RF had been. Was left with a bunch of ugly holes.
So I put the screws back in and sawed off the heads, then sanded all the visible aluminium with ten different grits of paper to get a shine.
Step one. Not sure if I should let the rangefinder hang on… it's kind of super useless.
Moldy and leaking bellows. Dirt. Missing rangefinder knife.
Removed the bellows. Cleaning off 70 years of grime. Clinique 7-day scrub and face soap.
Dear Uncle Larry,
thank you for the episode on Cowboy Chords! I really appreciate you sharing your knowledge in this honest, straight-to-the-point musical way – it's just great, no bullshit! I'm a pianist trying to get my fat fingers moving on the neck like they do on the piano… not that easy. I realised I've been thinking about the groups of inversions in roughly the same way as you.

(Check out this episode of Homeschoolin’ if you haven’t already https://www.youtube.com/watch?v=oePRDaHLOLA)
I look at it this way: there is only ONE shape.
All major triads come from one shape, the yellow one. I call it my "A-shape", with the root on the 6th string.
The same figure with the root on the 5th string, in red, is my "D-shape". The root on the 2nd string has moved up a half step due to the tuning, but it's the same form.
Then the same figure with the root on the 4th string, in green, is my "F-shape". Now both the root on the 1st string and the fifth on the 2nd string have moved up a half step, but the relationships between the triad notes are the same.
(If you had a low B and F# string as 7th and 8th strings, you would see the green figure with the major third from the 8th string upwards.)
So then A is in the yellow "mother" figure. C, E and F can be found in the green shape, D in the red, G as a combo of the yellow & green, and B, like A, in the yellow. Guess you could call it the ABCDEFG-system 🙂
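For anyone who likes to see the arithmetic, here is a tiny Python sketch of the same idea. It assumes standard E-A-D-G-B-E tuning and only looks at a bare three-string root–third–fifth voicing, so it won't reproduce the full coloured figures, just the half-step shift once a note crosses onto the B string.

```python
# Standard tuning, low to high, as MIDI note numbers.
TUNING = {6: 40, 5: 45, 4: 50, 3: 55, 2: 59, 1: 64}  # E A D G B E

def triad_frets(root_string, root_fret):
    """Frets for root, major third and fifth on three adjacent strings,
    with the root on `root_string`."""
    root = TUNING[root_string] + root_fret
    intervals = (0, 4, 7)                        # root, major 3rd, perfect 5th
    strings = (root_string, root_string - 1, root_string - 2)
    return [(s, root + iv - TUNING[s]) for s, iv in zip(strings, intervals)]

print(triad_frets(6, 5))  # A major, root on the 6th string -> [(6, 5), (5, 4), (4, 2)]
print(triad_frets(5, 5))  # D major, root on the 5th string -> [(5, 5), (4, 4), (3, 2)]
print(triad_frets(4, 3))  # F major, root on the 4th string -> [(4, 3), (3, 2), (2, 1)]
```

The first two shapes come out identical (the same relative frets), while the third has its fifth one fret higher, because the B string sits a major third rather than a fourth above the G string.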
Hope this is useful to someone, feel free to comment below!
Best
John
Fry 3 onions and the garlic. Fry 400g of minced lamb or beef together with some mustard seeds (let them pop in the oil before adding the meat) and some chili flakes. Add 2dl of crushed tomatoes and season with soy sauce, salt, pepper, Dijon mustard and some meat stock. Let simmer for half an hour until tasty. I like to add some blue Camembert and a few splashes of Jallu to even out the taste.
Melt 50g of butter, add 1–2dl of cream and half a box of Koskenlaskija or similar cheese, and let it simmer until suitably thick. Season with salt and black pepper, maybe a little cayenne.
Take a large baking dish. Begin by slicing one large zucchini and laying it down in the bottom. Add half of the spinach leaves, a layer of the meat sauce and one of the mozzarella balls in slices. Add some basil leaves and then a second layer of all of the above. Pour over the white sauce. Finish off with some grated Gruyère, Cheddar or similar, and basil leaves.
Bake in the oven for around 50 minutes at 150°C.
400g meat
25–50g Butter for the sauce
2 large shallots
Garlic
1 Large Zucchini, slice it.
1 bag of spinach leaves
2 bags of mozzarella, slice them.
2–3 large tomatoes, sliced.
2dl crushed tomatoes for the sauce
1/2 package of Koskenlaskija
1 cup grated cheese for the garnish