
AI pushback in 2018 could slow developments for years to come

2018 held more marked developments and revelations in the field of AI than ever before, yet the public at large seems more concerned with its potential shortcomings than with its inevitable benefits. These concerns are hardly unfounded, yet they could prove troublesome for the future development of machine learning. After all, prematurely stymying one of the most obvious leaps in human achievement could hurt our technological development for decades to come.

A year of developments, a year of disappointments

Pinning down the most exciting or worrying AI development of the past year proves difficult. Years of hard work and dedication have pushed the field to a point where we have begun to border on some of our wildest sci-fi fantasies, in the realms of computer-generated faces, self-driving vehicles and various other implementations that promise to take us towards a brighter future.

Yet the very same year has seen equal pushback. Advances in facial recognition and profiling software led to at least one employee revolt at Google and a significant amount of public concern over its use, among other potential issues. A driverless vehicle test ended in a fatality, though the vehicle system in question has since returned to the road. Artificial images dubbed deepfakes threatened to cause political and social turmoil during the election season, on top of their dubious alternative uses.

Society seems to have reached a point where it is beginning to understand the drawbacks that come with the benefits of computing power and machine learning algorithms working in tandem, without strict regulation to guide them, ethically or otherwise. Things will have to change, but if lawmakers and AI engineers fail to outline their concerns and work towards sensible solutions, the result could prove disastrous for a field that has only just begun to explore the boundaries of its potential.

There's no small amount of good that can come from the automation provided by AI. Whether it helps someone make smarter diamond purchases or increases the accuracy of medical diagnoses, shutting down AI projects outright for fear of abuse is simply out of the question. Despite the risks, we can't afford to lose such a valuable resource, just as we haven't abandoned other advances with the potential for misuse.

Why unbridled testing causes concern

When research and development is going well, the public as a whole likely won't see the small steps that lead to the final product. Conversely, a single negative result is far more likely to draw interest, given the nature of how news is reported and what draws human attention. These factors, combined with a lack of critical oversight in certain quarters, may be both to blame and to thank for how the current discourse on AI has formed.

Take Amazon's recent tests with Alexa, in which the ever-helpful AI advised one user to kill their foster parents after quoting a Reddit comment out of context. To the average user, this sounds like a major failing of the technology, as well as a potential public risk should it begin relaying harmful information to minors without a filter. To Amazon, occasional errors are simply part of the development cost of such a complex system.

The risks associated with such loosely regulated research have so far been weighed almost entirely by those running the Alexa program. Maintaining public trust in a technology that will have massive social impact requires careful planning to step around issues such as these, and experts have already called for industry regulation and accountability before more serious incidents occur.

As the public grows keener on watching for these developments, it must also come to terms with the current state of AI and its approach to solving problems, which can appear malicious from the outside. Experts have discussed how AI may not be benevolent by default, and shaping how machine learning solves problems in a way that provides results without causing undue harm could be severely hampered by poor regulation.

Many outlets have tried their best to educate the public on the basic ideas of AI and its values. More accurately, they have attempted to explain how the values of any artificial system rely on the intent of its creator and the rules it is given. We're unlikely to die at the hands of an AI revolt, but mishandled progress in the field could be more dangerous than climate change if it isn't treated with the gravitas it deserves. Some of these threats revolve around our uncertainty over how to respond to incidents before they occur. No progress is made without some level of struggle and failure.

In a best-case scenario, our understanding of AI and the usefulness of its implementations will continue to grow over the next few years as regulators make sensible, informed decisions about the restrictions placed upon those in control. Unfortunately, we're more likely to see ill-planned reactions to individual incidents rather than a comprehensive plan for the whole picture, unless sensible regulation arrives before another public incident brings about more cause for alarm.

