The Responsibility of Creation

Playing God

The story of Frankenstein, written by Mary Shelley in 1818, is a cautionary tale about the dangers of scientific experimentation and the responsibility that comes with creating new life. Fast forward to the present day, and we face a similar situation with the development of artificial intelligence (AI) and the pursuit of artificial general intelligence (AGI). While there are certainly differences between creating a physical being like Frankenstein’s monster and building an intelligent machine, there are also important similarities in the ethical and moral considerations involved.

First, let’s start with the similarities. Both creating Frankenstein’s monster and building AI involve the creation of a new life form. In the case of Frankenstein, the creature is assembled from various body parts and brought to life through a mysterious scientific process. With AI, the creation process is more abstract: designing and programming a machine to perform tasks. Still, I would argue that both acts of creation carry a similar degree of mystery.

One major instance where reality mirrors art is the potential for unintended consequences. In Frankenstein, the monster’s creator, Victor Frankenstein, is horrified by the results of his experiment and is unable to control the monster. Similarly, there are concerns about the unintended consequences of AI, such as machines developing biases or making decisions that harm humans. This brings us to the alignment problem, one that has garnered significant media attention recently, with many AI leaders calling for a halt to further development at companies like OpenAI.

The Importance of Getting it Right

Responsible AI development reminds me of threading a needle: it is meticulous and time consuming, and the narrow decisions made in the near future will require careful consideration of ethical and moral issues such as bias, transparency, and accountability.

So, what does responsible AI development look like? There are a few key principles that should guide the development of AI to ensure that it is safe, beneficial, and aligned with human values.

  1. Sustainability over speed: As covered in episode #371 of The Lex Fridman Podcast, Max Tegmark: The Case for Halting AI Development, perhaps the most important point in this debate is the tension between personal financial gain and civilizational sustainability. In capitalist-driven societies like today’s, the early bird gets the worm. Tegmark discusses humanity’s battle with “Moloch”, paying homage to Allen Ginsberg’s poem, which describes how greed fuels arms races. This dynamic will likely lead many AI labs to prioritize shipping product and getting to market over ensuring that the technology can meet our needs in the long run. Government leaders and entrepreneurs need to prioritize this conversation alongside building.

  2. Human-centered design: There is a massive (perceived) alignment problem between humanity and machine. AI should be designed to serve human needs and values, and should be developed in collaboration with stakeholders such as users, experts, and impacted communities. I will be writing a follow-up blog post on this topic.

  3. Ethical considerations: Hmm, an ethics conversation in 2023? Exciting. And scary. Harvard Business Review published an article, Ethics and AI: 3 Conversations Companies Need to Have, that I found valuable for how it frames this topic. One of my biggest takeaways was the importance of bringing many different perspectives into the debate. Obviously, AI developers should consider the ethical implications of their work, including issues such as fairness, privacy, and accountability. But developers and builders are not going to solve this alone; it is important to bring in voices from philosophy, law and compliance, and experienced founders.

  4. Transparency and explainability: Simply put, artificial intelligence, machine learning, and other technological tools are built on massive swaths of data. Where that data comes from, how it was collected, and how it might skew models will be a major point of discussion and debate over the next decade. At a bare minimum, AI systems should be transparent and explainable, so that users can understand how they work and make informed decisions about their use; a brief sketch after this list shows one simple way to probe what a model relies on.

  5. Safety and reliability: AI systems should be safe and reliable, with appropriate safeguards in place to prevent unintended consequences. They should also be continually monitored and evaluated to ensure that they are functioning as intended and to identify and address any issues that arise. I would assume this is where legislation comes in, but I cannot pretend to know the right answer. Much as David Sacks has argued for reforming Section 230 of the Communications Decency Act of 1996 with regard to social media, I suspect there is a wide-ranging need to modernize our laws to catch up with the level of innovation we have achieved.
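To make the explainability point a bit more concrete, here is a minimal sketch of one common technique, permutation importance, which measures how much a model’s accuracy drops when each input feature is shuffled. It assumes a scikit-learn model and uses a bundled demo dataset purely as a stand-in; it is one illustration among many, not a prescription for how explainability should be done.

```python
# A minimal sketch of model explainability: surfacing which input features
# drive a model's predictions. The dataset and model are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

Techniques like this do not solve transparency on their own, but they give users and auditors a starting point for asking why a model behaves the way it does.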

Conclusion

By following these principles, we can ensure that AI is developed in a responsible and ethical manner, serving the needs and values of humans rather than becoming a force beyond our control. Just as Victor Frankenstein learned the hard way about the importance of responsibility in scientific experimentation, we must approach AI development with caution and care. As always, I value your opinion, and I believe that more discussion and open discourse around these topics will lead to more nuanced solutions. So please, let me know your thoughts.

Per aspera ad astra
