The Global AI Explosion

AI creeps into everyday life

Artificial Intelligence (AI) has seamlessly woven itself into the fabric of our daily existence. From navigation systems like Google Maps that efficiently guide us from point A to B, to the imminent arrival of fully self-driving cars, AI’s influence is undeniable. For years, major media and social platforms have utilized AI to tailor advertising strategies based on our behavioural patterns, subtly shaping our lives via our digital experiences.

The 2024 AI Revolution

The year 2024 marked a pivotal acceleration in the AI discourse, with technologies like ChatGPT (initially powered by GPT-3.5) seeming to emerge overnight. My engagement quickly evolved from curiosity to commitment, leading me to subscribe for GPT-4 access and experiment with a variety of competing models. I see my children and their friends now using AI services daily in their school studies. The landscape has expanded rapidly to include AI-driven capabilities in voice synthesis, image generation, and video production, not to mention specialized tools for sectors like legal, medical, engineering, mining and software development.

Pioneers – Strategic Challenges

AI is based on neural networks, which were first formulated in the 1940s. Since then, neural-network concepts have evolved and proven to be powerful analytical tools. I recall that in the 1990s the banking industry started using neural networks for fraudulent-transaction detection, and geologists were experimenting with the analysis of multiple layers of land and soil data.

The recent explosion in AI services was triggered partly by the availability of mass data, together with advances in computational power through improved graphics processing chips that were efficient at processing neural networks, all combined with the massive processing power available from Cloud infrastructure. But the central trigger was the development of a new neural-network architecture known as the Transformer, introduced in 2017. Transformers are the foundation of Large Language Models (LLMs) such as ChatGPT and Gemini.

These new LLMs are transformer-based neural networks trained on vast amounts of natural language data, possibly consuming most of the text on the Internet and, controversially, the massive video content on services like YouTube. Legal and ethical questions about the use and ownership of that data abound.
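The core trick inside a transformer is "attention": every token in a sequence looks at every other token and takes a weighted average of their values. The toy sketch below shows a single attention head in NumPy; all sizes, weight matrices and function names are invented for illustration and bear no relation to any real model's internals.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax along the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how strongly each token attends to each other token
    weights = softmax(scores)                 # each row sums to 1
    return weights @ V                        # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                              # (4, 8): one updated vector per token
```

A real LLM stacks dozens of such attention layers (plus feed-forward layers) and learns the weight matrices from training data rather than drawing them at random.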

The massive infrastructure and energy requirements associated with developing and deploying artificial intelligence pose significant barriers to entry for new AI companies. In addition, the existing user bases of the commercial giants in Internet search, social media, mobile devices and Cloud infrastructure give those organisations huge, and possibly insurmountable, advantages over new global entrants to the world of AI, with Tesla a notable exception.

OpenAI, co-founded by visionaries including Elon Musk, who reportedly contributed approximately $100 million in its early stages, is a testament to the hefty financial backing needed to jumpstart such advanced technologies. This initial funding was crucial in propelling the development of what would become foundational technologies like ChatGPT, but it was soon realised that billions of dollars would be needed annually to build the infrastructure required to scale ChatGPT. Hence the partnership of OpenAI with Microsoft (reportedly entitled to 49% of OpenAI's profits), giving OpenAI access to Microsoft's global Cloud infrastructure.

Elon Musk was a co-founder of OpenAI in 2015 but exited the venture in 2018; he has since channelled his AI ambitions into Tesla, including its massive Dojo processing infrastructure.

The Landscape of Chip Technologies

An AI model's computation can be thought of in simple terms as a spreadsheet, where the first column is the first layer of calculations and each subsequent column may have more rows. Data fills the first column, then is pushed to the next column by applying rules and calculations to the first column's values, then to the next column and the next, dealing with millions or even trillions of cells across the model.
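The spreadsheet picture above can be sketched in a few lines of NumPy: each "column" becomes a vector of values, and pushing to the next column is a matrix multiplication plus a simple rule (here ReLU). The layer sizes, weights and input values below are invented purely for illustration, standing in for a trained model.

```python
import numpy as np

def relu(x):
    """The 'rule' applied to each cell: keep positives, zero out negatives."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(1)
layer_sizes = [3, 5, 5, 2]  # first column has 3 cells, last has 2

# Random weights and biases standing in for a trained model's parameters.
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [rng.normal(size=n) for n in layer_sizes[1:]]

values = np.array([0.5, -1.0, 2.0])  # data filling the first column
for W, b in zip(weights, biases):
    values = relu(values @ W + b)    # push the values to the next column

print(values.shape)                  # (2,): the final column of the "spreadsheet"
```

A production model does exactly this, just with billions of weights instead of a few dozen, which is why the chips described below matter so much.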

AI requires massive, energy-consuming infrastructure: essentially hundreds of thousands or even millions of chips designed to efficiently push data through AI neural networks. The common types of chips are CPUs, GPUs and APUs.

CPU means Central Processing Unit. CPUs are at the heart of PCs; they are fast, great at processing sequential commands, and well suited to general-purpose computing.

GPU means Graphics Processing Unit. Innovation around graphics display has been led by Nvidia and driven by the television and gaming industries. Each pixel's colour is a simple calculation, but millions of data values need to be 'shot-gunned' to the physical pixels rapidly, and that same massively parallel arithmetic is exactly what neural networks require.

APU means Accelerated Processing Unit, a term associated with the manufacturer AMD for a processor that combines both a CPU (central processing unit) and a GPU (graphics processing unit) on a single chip.

Nvidia is the clear leader in AI chip technology, a position resulting from its experience and leadership in GPU innovation and AI accelerators. It has invested billions in pursuit of global leadership in AI processing technologies.
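A rough way to feel the CPU-versus-GPU distinction is to compare a sequential loop with a data-parallel operation. The sketch below uses NumPy vectorization as a stand-in for GPU-style parallelism; it is an analogy only, since NumPy runs on the CPU, and the pixel brightness and gamma values are invented for illustration.

```python
import numpy as np

pixels = np.linspace(0.0, 1.0, 100_000)     # brightness values for many pixels

# Sequential, one-pixel-at-a-time style (how a CPU loop works):
gamma_loop = np.empty_like(pixels)
for i in range(pixels.size):
    gamma_loop[i] = pixels[i] ** 2.2        # the same tiny calculation, repeated

# Data-parallel, all-pixels-at-once style (the GPU idea, mimicked here
# by NumPy vectorization):
gamma_vec = pixels ** 2.2

print(np.allclose(gamma_loop, gamma_vec))   # same result, very different style
```

The calculation per pixel is trivial; what GPUs add is the ability to run that trivial calculation across millions of values simultaneously, which is why they map so naturally onto neural-network workloads.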

The Hungry Beast: AI’s Demand for Power

Beyond hardware, AI’s appetite for energy poses a significant challenge. Some commentators estimate that generating a single image may take as much power as charging a mobile phone.

The massive electricity demands to power data centers are not just a logistical issue but also an environmental concern. As these intelligent systems become more integrated into our infrastructure, the quest for energy-efficient AI solutions becomes increasingly critical.

Big Players and Bigger Ambitions

Companies like Microsoft, Meta, OpenAI, AWS, Google, Tesla, and various emerging competitors are all striving for a slice of the AI dominance pie. Each entity brings unique innovations and approaches to the table, yet they share common hurdles: scaling technology sustainably, managing vast data requirements, and navigating the complex ethical implications of AI.

Data Ownership and Ethics for Model Training

A key dimension of data and AI model ownership and rights involves the ownership of the data used to train AI models. Vast amounts of data are gathered from the Internet. Also, many companies collect vast amounts of data through their operations and consider this information a competitive advantage. When such data is used to train AI models, questions arise about intellectual property rights. Who owns the model trained on proprietary data? How can the data be used without infringing on the original data owners’ intellectual property rights?

Consent and transparency are critical issues here. The General Data Protection Regulation (GDPR) in Europe and similar laws in other jurisdictions have started to address these concerns by enforcing rules around data collection and usage.

Addressing these issues is crucial not only for ethical reasons but also to foster public trust in AI technologies. Without clear guidelines and respectful practices regarding data ownership, the potential for misuse of AI is significant. Thus, as we advance in developing AI capabilities, parallel progress in ethical practices and legal frameworks is essential to harness the full potential of AI technologies while safeguarding individual rights and societal values.

Social and Market Ethics

The potential for AI-assisted internet searches to be manipulated or to favour a single vendor poses significant risks to market competition and consumer choice. When search algorithms are designed or tweaked to prioritize certain vendors, startups and smaller competitors may find it difficult to gain traction or even visibility in the marketplace. This can stifle innovation and limit the options available to consumers. The concern extends beyond just commercial implications; such manipulation could also influence public opinion and information access, thereby shaping societal norms and behaviors based on skewed data or commercial interests rather than genuine user needs and preferences.

AI systems can unintentionally perpetuate and amplify existing societal biases, including those based on race, gender, and socioeconomic status. Since AI algorithms learn from historical data, if this data contains biases, the AI’s decisions will likely reflect these prejudices.

Furthermore, there’s the risk of dependency where crucial decisions are deferred to AI, potentially diminishing human oversight and accountability. Ensuring that AI systems are transparent and explainable is vital to maintaining trust and understanding in AI applications.

The Dangers of AI to Humanity and Mitigation Strategies

The dangers of AI to humanity range from the immediate to the existential. At a basic level, poorly designed AI systems can malfunction, leading to accidents in scenarios such as autonomous vehicle crashes or medical diagnosis errors. At a more profound level, there’s the risk of AI being used in malicious ways, such as developing advanced cyber-attack tools or autonomous weapons systems, which could be difficult to counteract.

One of the most discussed existential risks is the scenario where AI surpasses human intelligence in a wide range of fields—a moment often referred to as the singularity. Post-singularity, AI could potentially operate with goals misaligned with human values and interests, leading to scenarios where AI’s optimization processes harm human welfare.

Mitigating these dangers involves several strategies:

  1. Robust AI Development: Building AI systems that are robust and reliable, capable of handling unexpected situations without malfunctioning.
  2. Ethical Frameworks and Regulation: Developing comprehensive ethical guidelines and regulatory frameworks to govern AI development and deployment. This includes international cooperation to manage and monitor AI developments globally.
  3. Transparency and Accountability: Ensuring that AI systems are transparent in their operations and that there are mechanisms in place to hold developers and operators accountable.
  4. Public Awareness and Education: Enhancing public understanding of AI and its impacts through education and open dialogue. This helps in democratizing AI and ensuring that its benefits and risks are widely understood.
  5. Alignment and Control: Researching ways to align AI’s goals with human values and developing control methods to ensure that even highly autonomous AI systems remain under human control.


Conclusions

GPT-4-generated conclusion – Prompt: Write an exciting conclusion with some humour, to this article:

As we conclude our journey through the explosive world of AI, it’s clear that while the technology may be rocketing us into the future, it’s also planting a few banana peels along the way. From the omnipresence of AI in everyday tools like Google Maps to the grand visions of self-driving utopias, we are strapped in for a ride that’s as thrilling as it is unpredictable.

In 2024, AI didn’t just knock on the door; it blew it off the hinges. Technologies like ChatGPT 3.5 and its successors have waltzed into our lives with the swagger of a rock star, and they’re here to stay, turning sci-fi into everyday reality. It’s a revolution that has us toggling between awe and “oh wow” as we watch our children chat with AI for homework help—hopefully more accurate than the ‘help’ we got from Clippy back in the day.

Amidst the dazzle of possibilities, though, we’re also grappling with some gritty truths. The voracious power appetite of AI’s hungry beast of infrastructure, the towering barriers to new AI entrants, and the looming specter of ethical quandaries remind us that with great power comes great responsibility—and a hefty electricity bill.

The pioneers of AI, like the magicians at OpenAI and their wizardry at Microsoft, are boldly funding what no one has funded before. Yet, as we advance, we’re reminded that safeguarding individual rights and societal values is not just necessary but imperative to avoid turning those AI dreams into runaway nightmares.

So, as AI continues to redefine the possible, let’s ensure it remains our trusty sidekick, not our overlord. After all, we want AI to help us avoid traffic jams, not drive us into existential ones. And who knows? With a bit of luck and a lot of code, the future of AI might just be as bright as the screens we’re reading this on—just hopefully not as prone to freezing up at the worst possible times!