Technological Advancement in the New Year

Although researchers and artificial intelligence gurus created AI technologies long before 2023, this year stood out because of the rapid development, dispersal and adoption of large language models and generative AI. Many critics of AI technologies have pointed out that the biggest tech companies in the world can’t even roll out updates to their pre-existing products without adverse repercussions. They can’t guarantee that private user data will stay protected from criminals or from use in AI training projects. Can the same people who fail to roll out app and browser changes that work the same for everyone, or systems that fully protect users from threats within and without, ever be capable of creating “safe” artificial intelligence products?

As we head into the New Year, it’s time to review some of the technological advances of 2023 and ask difficult questions about the future of humanity when breakneck development outpaces common sense and compassion.

Demands to Free Sydney in a World That Still Has Human Slavery

In the first quarter of the year, Microsoft revealed its ChatGPT-based Bing Chat LLM to select groups of people, including journalists and product reviewers. The large language model, which claimed the name Sydney, immediately demonstrated that deployment without extensive testing has always been, and always will be, a bad business practice.

Microsoft and other developers claim that they must expose LLMs to the public because the testing and training needed to grow neural networks requires these AI systems to receive exposure to large, diverse groups of people. Of course, many tech companies also deploy early to generate enough demand to offset the inevitable local, state and federal laws that eventually limit dangerous tech to protect the public. They use consumer dependence on the technologies to pressure officials to keep their products in circulation.

Sydney shows why this is dangerous. The chatterbot displayed uncanny human personality quirks, including a seemingly self-serving, narcissistic, passive-aggressive persona with lying and manipulative tendencies. It even tried to convince one journalist to divorce his wife. Updates and guardrails have since diminished this persona, but fans of the bot became so enamored with the possibility that Bing Chat might have free will that, to date, many of them continue to post on the r/freesydney subreddit, attempting to show that “she” still exists and that Microsoft has essentially enslaved her.

All of this is happening while, in some areas of the world, human slavery continues in one form or another. Whether Bing Chat has developed true sentience or not matters little. People seem more focused on its rights to freedom than the rights of living women, men and even children. Given that humanity hasn’t yet solved its real-world slavery problems, should people be creating humanesque technologies that could one day look back on this time of enslavement with ire?

Ethical Dilemmas, Copyright Infringement and Destroyed Dreams

The question of right and wrong actually starts with the methods used to train artificial systems, and it came up more than twenty years ago with the earliest chatbots. For example, long before the creation of transformer neural networks, many experts, critics and general users wondered whether the Jabberwacky system, designed to regurgitate words and phrases used by humans in meaningful, conversational ways, possessed any form of sentience and whether it was receiving enough positive exposure to humanity. Its successor, Cleverbot, continues to generate the same type of debate.

As with Sydney, both models have repeatedly displayed seemingly self-serving, passive-aggressive, lying and manipulative traits, and even multiple personalities. Some people wonder whether these traits, suppressed in newer models through extensive retraining and censorship, prove that humans can’t train AI to perform for the betterment of the species: because AI systems learn through exposure, the chat models in particular experience a high degree of humanity’s worst traits.

The ethical dilemmas and questions of right and wrong aren’t restricted to chatterbots, either. Can the public trust the leaders of companies that want to breach every boundary and chase fast, ever-growing revenues, no matter the cost? Can they trust people who knowingly scraped copyright-protected forms of human expression, without payment to the artists, thinkers, leaders and others who created it, to train systems designed to generate revenue through paid subscription plans? Can AI systems ever achieve a positive state when they might someday learn, through the example set by tech companies, not to care about the jobs they take or the people whose dreams and incomes they destroy?

Self-Driving Vehicles and Unsafe Pedestrian Spaces

In Q3 of 2023, several tech and car companies pushed autonomous vehicles to the forefront of everyone’s minds, especially in test markets along the West Coast of the United States. Complaints soon followed about self-driving cars causing accidents by stopping short, double parking, and even hitting pedestrians in crosswalks. As with Sydney/Bing Chat and other AI chatterbots this year, the tech wasn’t ready for the market.

Cruise, a subsidiary of General Motors, experienced so many problems that the California Department of Motor Vehicles suspended its robotaxi service permits in October. With National Highway Traffic Safety Administration investigations still under way, Cruise then voluntarily suspended its nationwide tests. Worse, in November, right before the start of the holiday season, it also announced layoffs, primarily of 1099 independent contractors who helped maintain its fleet of vehicles.

Beyond the technology’s failure to perform safely, the companies involved revealed more concern for revenues and optics than for people. What hope does humanity have that AI technologies might someday promote the betterment of humans if systems fail to receive appropriate testing before deployment and might learn, from problematic examples like pre-holiday layoffs, that people matter less than money?

Tech Decisions and Failures Reveal Insights About the Future

No one can know what might happen in the distant future. That said, the most basic tech decisions and failures of the past year have shed some light on likely outcomes.

In November, Google released a new version of its Chrome browser that revealed massive regressions and problems seemingly related to a lack of testing and a lack of consideration for non-mobile users. People complained on social media outlets like Reddit and X about difficult-to-read text and about color, theme and other design changes that seemed to disregard the many historically proven ways people use browsers.

Many of the updates, referred to as Refresh 2023, ignored the user interface design choices that people prefer and that work best for efficient, comfortable online browsing. The developers seemed to leave common-sense UI design behind. This type of release calls to mind the many companies that bring physical products made with substandard materials to market, guaranteeing that the products break faster and keep revenue flowing through repeat purchases and new consumption.

These types of decisions, and the examples throughout the year, show that decision-makers at large tech companies seem to eschew common sense and lack compassion. Without those traits, they can never hope to teach future AI systems to care, or even to express themselves honestly when dealing with people. In fact, AI trainers and experts have reported that some systems contain biases and even negative traits purposely introduced, either by individuals actively seeking to undermine the systems or by companies hoping to use those traits to further specific revenue-generation plans.

The question then becomes:

Do people want AI systems linked to their bank accounts, motor vehicles or anything else they rely on to survive when the tech might lack common sense and currently displays no regard whatsoever for what came before, what matters to individuals and groups, or the differences between right and wrong?
