2019 Internet Health Report: Let’s ask more of AI

How can we as a public make choices and engage with technology we don’t yet understand?

Stefania Druga from Romania teaches artificial intelligence (AI) programming to children. As a researcher, she has also studied how 450 children in seven countries interact with and perceive connected toys and home assistants, like Amazon Alexa or Google Home.

Children can understand more than parents think, she says –– including that machine learning is limited by what training data you have to work with.

The philosophy behind the software she developed for teaching is that if children are given the opportunity for agency in their relationship with “smart” technologies, they can actively decide how they would like them to behave. Children gather data and teach their computers.
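To make that idea concrete, here is a minimal sketch of the “teach your computer” pattern. It is not Druga’s actual software; the phrases, labels and model choice below are hypothetical, and the point is simply that the resulting behavior depends entirely on the examples a child chooses to provide.

```python
# A minimal, hypothetical sketch of the "teach your computer" idea:
# a child collects and labels a few examples, and a simple model
# learns to classify new inputs from them.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Examples a child might gather and label themselves.
phrases = ["play my favorite song", "turn on the lights",
           "play something happy", "lights off please"]
labels = ["music", "lights", "music", "lights"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(phrases)

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, labels)

# The "smart" behavior is only as good as the examples it was taught with.
print(model.predict(vectorizer.transform(["please play a song"])))  # ['music']
```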

This simple approach is what we urgently need to replicate in other realms of society.

To navigate the implications AI has for humanity, we need to understand it –– and then decide what we want it to do. Use of AI is skyrocketing (for fun, as well as for governance, military and business purposes), and not nearly enough attention is paid to the associated risks.

“Yup, it’s probably AI,” says Karen Hao’s back-of-the-envelope explainer about any technology that can listen, speak, read, move and reason. Without necessarily being aware of it, anybody who uses the internet today is already interacting with some form of AI automation.

Put simply, machine learning and AI technologies are just the next generation of computing. They enable more powerful automation, prediction and personalization.

These technologies represent such a fundamental shift in what is possible with networked computers that they will soon likely make even more headway into our lives.

Whether they produce search engine results, music playlists or map navigation routes, these processes are far from magical. Humans write “algorithms”, which are basically formulas that decide how decisions should be automated based on whatever data is fed into them.
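As an illustration, a “recommendation algorithm” can be as mundane as the hypothetical scoring formula below: a developer picks the weights, and whatever data flows in determines what gets shown first. The field names and numbers are invented for the example.

```python
# A hypothetical sketch of what "an algorithm" often amounts to: a formula,
# written by a person, that turns data about items into an automated decision
# such as which video to recommend first.
def score(item):
    # Weights chosen by a developer, not handed down by the machine.
    return 0.6 * item["clicks"] + 0.3 * item["watch_time"] - 0.1 * item["age_days"]

videos = [
    {"title": "A", "clicks": 120, "watch_time": 300, "age_days": 2},
    {"title": "B", "clicks": 80, "watch_time": 900, "age_days": 30},
]

# The automated "decision": the ordering users actually see.
ranked = sorted(videos, key=score, reverse=True)
print([v["title"] for v in ranked])  # ['B', 'A']
```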

Where it begins to feel magical is when it makes new things possible. This Person Does Not Exist is a good example. If you visit the website and refresh the page, you will be shown an endless array of faces of people who never existed. They are images generated at random by a machine learning algorithm based on a database of faces that do exist.

Look closely, and you will spot the errors –– ears that are crooked, hair that doesn’t fall naturally, backgrounds that are blurred. This Cat Does Not Exist is less convincing. Either photo generator could improve with additional data and guidance. And the risk that such photos could be used to misrepresent reality also exists, even for such whimsical creations.
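The general pattern behind such generators is simple to sketch, even though the real model behind the site (NVIDIA’s StyleGAN, trained on a large dataset of real faces) is vastly larger. The toy PyTorch network below is only an illustration of that pattern, not the actual model: a random “latent” vector goes in, and a learned network turns it into an image, a new one for every refresh.

```python
# A toy illustration of a generative network: random noise in, an image out.
# Untrained, it produces static; trained on faces, the same structure (at much
# larger scale) produces faces that never existed.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 64, kernel_size=4, stride=1, padding=0),  # 1x1 -> 4x4
    nn.ReLU(),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 4x4 -> 8x8
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),    # 8x8 -> 16x16
    nn.Tanh(),  # pixel values in [-1, 1]
)

z = torch.randn(1, 100, 1, 1)   # a fresh random latent vector per "page refresh"
image = generator(z)            # a tiny RGB image: shape [1, 3, 16, 16]
print(image.shape)
```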

In recognition of the dangers of malicious applications of a similar technology, researchers from OpenAI sparked a media storm by announcing they would not release the full version of an AI technology that can automatically write realistic texts, based partly on the content of 8 million web pages. “Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” they wrote, calling it an experiment in “responsible disclosure”.

Such recognition of the faultlines and risks for abuse of AI technologies is too rare. Over the last 10 years, the same large tech companies that control social media and e-commerce, in both the United States and China, have helped shape the AI agenda. Through their ability to gather huge quantities of training data, they can develop even more powerful technology. And they do it at a breakneck pace that seems incompatible with real care for the potential harms and externalities.

Amazon, Microsoft and others have forged ahead with direct sales of facial recognition technology to law enforcement and immigration authorities, even though troubling inaccuracies and serious risks to people of color in the United States have been rigorously documented. Within major internet companies that develop AI technologies, including Amazon and Google, employees have sounded alarms over ethical concerns with increasing urgency.

Company leaders deflect with confidence in their business models, hubris about their accuracy, and what appears to be ignorance of, or lack of care for, the huge risks. Several companies, including Axon, Salesforce and Facebook, have sought to allay concerns over controversies by creating ethics boards that are meant to oversee decisions.

Meredith Whittaker, co-founder of the research institute AI Now, calls this “ethics theater” and says there is no evidence that product decisions are run past these boards, or that they have any actual veto power. In an interview with Recode, Whittaker asked of the companies, “Are you going to harm humanity and, specifically, historically marginalized populations, or are you going to sort of get your act together and make some significant structural changes to ensure that what you create is safe and not harmful?”

As it happens, Google’s announcement of an ethics board backfired spectacularly in April 2019: the board was dismantled after employee protests and public outrage about who had (and hadn’t) been asked to join. While the company has been vocal about establishing principles for AI, and has engaged in social good projects, it also has competing priorities across its many ventures.

What real-world ethical challenges could these boards tackle if they took Whittaker’s advice? One idea would be to question an everyday function that affects billions of people. Google’s video platform, YouTube, is often said to be a “rabbit hole” –– endless tunnels leading from one video to another. Though YouTube denies it, research shows that content recommendation algorithms are fueling a crisis of disinformation and cultish behavior about vaccines, cancer, gender discrimination, terrorism, conspiracy theories and [add your topic].

Pinterest and Amazon also drive engagement by learning from user behavior and suggesting new content, and they experience variations of the same problem. In response to public scandals, each has announced efforts to stop anti-vaccine content, but there is little evidence of any real change in the basic intention or function of these systems.

But it’s not just technology companies that need to be interrogating the ethics of how they use AI. It’s everyone, from city and government agencies to banks and insurers.

At the borders of nine European Union countries, an AI lie detector was tested to screen travelers. Systems to determine creditworthiness are being rolled out to populations in emerging markets in Africa and Asia. In the United States, health insurers are accessing social media data to help inform decisions about who should have access to what health care. AI has even been used to decide who should and shouldn’t be kept in prison in the United States.

Are these implementations of AI ethical? Do they respect human rights? China, famously, has begun scoring citizens through a social credit system. Chinese authorities are now also systematically targeting an oppressed minority through surveillance with facial recognition systems.

Where do we draw the line?

There are basically two distinct challenges for the world right now. We need to fix what we know we are doing wrong. And we need to decide what it even means for AI to be good.

Cutting humans out of government and business processes can make them more efficient and save costs, but sometimes too much is lost in the bargain.

Too rarely do people ask: Should we do this? Does it even work? It’s worth questioning whether AI should ever be used to make predictions, or whether we should so freely allow it into our homes.

Some of the worst missteps have involved training data that is faulty, or simply used with no recognition of the serious biases that influenced its collection and analysis.

For instance, some automated systems that screen job applicants consistently give women negative scores, because the data they were trained on reflects a field currently dominated by men.
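A deliberately simplified, hypothetical sketch shows how that happens: if a screening model is trained on past hiring decisions from a male-dominated field, it can learn to penalize women even when nothing else about the applicants differs. The data and features below are invented for the illustration.

```python
# Hypothetical training data: columns are [years_of_experience, is_woman],
# labels are past "hired" decisions that historically favored men.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[5, 0], [6, 0], [4, 0], [7, 0], [5, 1], [6, 1], [4, 1], [7, 1]])
y = np.array([1, 1, 1, 1, 0, 1, 0, 0])

model = LogisticRegression().fit(X, y)

# Two applicants identical in every respect except gender get different scores:
# the model has learned the bias baked into its training data.
print(model.predict_proba([[6, 0]])[0, 1])  # probability of "hire" for a man
print(model.predict_proba([[6, 1]])[0, 1])  # lower probability for a woman
```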

“The categories of data collection matter deeply, especially when dividing people into groups,” say the authors of the book Data Feminism, which explores how data-driven decisions will only amplify inequality unless conscious steps are taken to mitigate the risks.

If we leave it up to the nine big companies that dominate the field of AI alone, we raise the spectre of a corporate-controlled world of surveillance and conformity –– especially so long as gender, ethnic and global diversity is lacking among their employees at all levels. Having engineers, ethicists and human rights experts collaboratively address how AI should work increases the chance of better outcomes for humanity.

We are merely at the beginning of articulating a clear and compelling narrative of the future we want.

Over the past years, a movement to better understand the challenges that AI presents to the world has begun to take root. Digital rights specialists, technologists, journalists and researchers around the globe have in different ways urged companies, governments, militaries and law enforcement agencies to acknowledge the ethical quandaries, inaccuracies and risks.

Each and every one of us who cares about the health of the internet needs to scale up our understanding of AI. It is being woven into nearly every kind of digital product and is being applied to more and more decisions that affect people around the world. For our common understanding to evolve, we need to share what we learn. In classrooms, Stefania Druga is making a small dent by working with groups of children. In Finland, a grand initiative sought to train 1% of the country’s population (55,000 people) in the elements of AI. What will you do?

Credit: Internet Health Report
