AI and Ethics – Love or Hate?

Artificial intelligence is intelligence demonstrated by machines, as opposed to the intelligence of humans and other animals. John McCarthy, a founding father of AI, defined the field in a 2004 paper as the study of "intelligent machines, especially intelligent computer programs," work closely tied to the effort to understand human intelligence. Uses for AI technology include speech recognition, computer vision, translation between languages, and mapping inputs to predicted outputs. Simply put, AI is a field that combines computer science and robust datasets to enable problem-solving.

As AI has rapidly grown in sophistication, many have criticized it for unethical applications, especially as the technology takes on bigger decision-making roles in more industries. While AI tools are being used for a plethora of applications, such as minimizing the lengthy and pricey trial-and-error phase of product development or following instructions in ChatGPT prompts while providing detailed responses, AI has also attracted severe criticism. The viral chatbot ChatGPT has drawn heavy criticism for worsening human connection and communication, and has sparked fears over its potential for unethical uses in academia. In general, the consensus among many AI professionals is that there are simply "too many chatbots and too few real-problem-solving applied AI solutions."

But can AI be used…for good?

Well, in the short term at least, the answer is yes. The applications of AI are so diverse that to vilify the technology would be to miss out on all it has to offer. In a conversation with ChatGPT about AI, climate tech, and sustainability, Joel Makower, Chairman and Co-founder of the GreenBiz Group, aimed to let the technology "speak for itself." Even at this early stage, ChatGPT claimed it could help reduce energy consumption and the associated emissions by processing and analyzing consumption and emissions data: monitoring the sustainability performance of suppliers, predicting when equipment will fail, and optimizing energy use in buildings and industrial processes.

In the medical field, health care experts see a variety of uses for the technology, from billing and paperwork processing to expanded analysis of data, imaging, and diagnoses. For example, Aidence, a Dutch company, has tailored AI for radiologists, improving diagnostics for the treatment of lung cancer around the needs and input of clinical specialists.

In efforts to reduce AI's carbon footprint, Google's DeepMind has created AI capable of cutting the electricity used to cool its data centers by a staggering 40 percent, and there are hopes the approach can be applied at the larger scale of energy distribution. Beyond monitoring energy usage, AI-optimized models offer strategies for protecting biodiversity, addressing water pollution and scarcity, and creating more environmentally sustainable transportation networks. And much as it uncovers patterns in manufacturing processes and operations, AI can discover patterns in climate data that help forecast extreme weather events and natural disasters, enabling more proactive measures to be taken.

Additionally, agriculture-focused AI initiatives are on the rise. AGEYE Technologies offers plant-stress monitoring that "help(s) indoor farms produce consistent, highly optimized yields with increased sustainability and scalability." A Belarusian startup, OneSoil, provides satellite imaging and analysis to remotely monitor crops, increase yields, and reduce seed and fertilizer costs for farmers. HelioPas AI, another European company, aims to help farmers increase yields long-term with fewer resources by providing the most accurate, reliable, and affordable data available.

Recently, the Future of Life Institute called for a temporary halt in development with its open letter "Pause Giant AI Experiments," which calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Published on March 22, 2023, the letter has received over 21,000 signatures in the hope of refocusing the field on "making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal." The letter also urges developers to work directly with policymakers to accelerate the creation of a strong AI governance system.

The US is home to the most robust AI ecosystem in terms of size, funding, and global reach, with 40 percent of all AI companies based in the country; China and Israel house the next strongest ecosystems. With political willpower, adequate funding, public-private partnerships, and a clear strategy, a country can become an influential artificial intelligence player within years. Holding these players accountable for understanding how AI is implemented in modern systems, and what the repercussions of using the technology are, may help establish the moral foundations of its ethical use.