In our last three blogs, we provided a brief overview of the field of Artificial Intelligence and its impact on the world of work, along with suggestions to help career counsellors respond to these innovations with their clients. While predictions suggest a massive and immediate impact on the workforce, in this blog we discuss some issues that we think need to be addressed before implementation, issues that may even delay the use of some deep learning technology.
Predictions are that within five years, deep learning machines with the ability to mimic human cognitive functions will take over many thousands of jobs (1). These innovations are already being used in law enforcement, health care and scientific research, and even in determining what information we see on Facebook. Before deep learning machines are deployed more widely, several social policy and legal issues need to be clarified (2). One issue is the lack of transparency in how these algorithms develop (3). Deep learning systems are built in layers: as the machine processes larger volumes of data, it forms connections between layers that allow it to make more refined decisions. Because these connections change in ways that are difficult to trace, some developers have questioned whether the decisions these machines make can be trusted.
This issue of trust raises legal questions that have yet to be resolved by the courts (4). For example, if a machine makes a biased decision that discriminates against a job candidate, say, by leaving them off a short list, who is responsible: the developer, the owner of the machine, or the machine itself? We suggest that policies and legal statutes need to be in place before such technology is implemented. For example: can a machine be a legal entity, much like a corporation? What standards of security must a machine demonstrate to protect the privacy of users’ information before it is deployed? Settling such questions in advance will help prevent unnecessary legal disputes.
Additionally, there are indications that the public is already leery of decisions made by machines, and we think this attitude will slow the adoption of newer innovations until testing demonstrates that they are free of bias and other weaknesses. For example, autonomous cars have been in the media for over a decade. Current research suggests that 94% of US citizens know about these cars; however, 56% indicated they are not ready to ride in such vehicles, citing a lack of confidence and trust in robotic decision-making and a mistrust of the technology’s general safety (5). To change these attitudes, the industry has more development and promotional work to do before the public will use this technology.
Smart machines open the possibility of collecting large amounts of personal data from users, data that could be used for nefarious purposes. One has only to look at how foreign agents used Facebook to influence voters in the 2016 US presidential election, and at the public outcry when political parties used information that Cambridge Analytica had obtained from millions of Facebook users to build US voter profiles (6). At this point, policies that protect consumer information and assign accountability for breaches are in short supply. Given that deep learning machines may collect personal data, safeguards are needed to ensure confidentiality and protection.
Career counsellors can play a significant role in addressing these concerns. They can advocate for their clients by serving on policy development committees concerned with the deployment of smart machines in the economy. Career counsellors already follow ethical guidelines that regulate the use and storage of their clients’ personal information, and these guidelines could inform policies on the storage, use and dissemination of information collected by deep learning machines. Through their professional associations, career counsellors can send briefs to major banks, food retail companies, insurance companies, medical corporations, professional associations and politicians to express their concerns over the lack of parameters surrounding the use of deep learning machines. We think these endeavours will help to raise public awareness and shape policies and laws before deep learning machines become commonplace.
By Jeff Landine and John Stewart
Sources Used
1. McKinsey Global Institute (June 2018). AI, automation, and the future of work: Ten things to solve for. Retrieved on August 26, 2019 at www.mckinsey.com/featured-insights/future-of-work/ai-automation-and-the-future-of-work-ten-things-to-solve-for
2. Internet Society (2017). Artificial intelligence and machine learning: Policy paper. Retrieved on August 1, 2019 at www.internetsociety.org/resources/doc/2017
3. Gershgorn, D. (2017). AI is now so complex its creators can’t trust why it makes decisions. Retrieved on August 1, 2019 at www.qz.com/1146753
4. Beauchemin, H. (2018). Key legal issues in AI. Retrieved on September 19 at https://www.stradigi.ai/blog/the-key-legal-issues-in-ai/#pll_switcher
5. Smith, A. and Anderson, M. (2017). Americans’ attitudes toward driverless vehicles. Retrieved on August 1, 2019 at www.pewinternet.org/2017/10/04/americans-attitudes-toward-driverless-vehicles
6. Cambridge Analytica and Facebook: The scandal and the fallout so far (2018). The New York Times. Retrieved on August 1, 2019 at www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
*The views expressed by our authors are personal opinions and do not necessarily reflect the views of the CCPA