This is an opinion piece by a FRONTEO data scientist in response to the article “US and EU New Guidelines and Views on AI” published on October 20th.
One of the hallmarks of any new industry, especially one as nascent and cutting-edge as AI, is rapid proliferation and innovation. This, in turn, makes regulating these industries difficult, since regulatory frameworks tend to be left in the dust by rapid improvements in the capabilities and reach of the underlying technology. There is no question that AI technologies, given their ubiquitous presence and incredible potential for disruption in every sphere of life, need to be regulated; the question is how much. Too much, and you risk stifling innovation at the root; too little, and the disruption these technologies introduce to the social fabric may spiral out of control.
Focusing on Algorithmic Bias
The regulation proposed by the European Commission tries to strike that delicate balance by focusing its regulatory lens on one subset of AI issues, the so-called “algorithmic bias,” and further narrows the bulk of its focus to “high-risk” areas: mostly those concerned with equal access to fundamental societal benefits such as education, justice, law enforcement, and healthcare.
Unsurprisingly, much of the language in the proposed AI regulation revolves around the AI training process: vetting the data sets used for training and validation, ensuring that they are representative of the population, and being able to provide transparency into them to regulators if necessary. Just as importantly, the regulatory framework requires robust human oversight of AI algorithms in these high-risk areas. All of this underscores the absolute need for trained data scientists on any team that employs AI-driven approaches to data analysis, since they are the ones ultimately best qualified to ensure compliance with these new regulations.
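To make the data-vetting idea concrete, here is a minimal sketch of the kind of representativeness check a data science team might run before training. It is only illustrative: the column name, reference population shares, and tolerance are hypothetical placeholders, not anything prescribed by the proposed regulation.

```python
# Illustrative sketch: compare the make-up of a training set against
# reference population shares for a sensitive attribute and flag groups
# that are over- or under-represented. All names and numbers are examples.
import pandas as pd

def representativeness_report(df: pd.DataFrame,
                              attribute: str,
                              population_shares: dict,
                              tolerance: float = 0.05) -> pd.DataFrame:
    """Flag groups whose share in the data deviates from the reference
    population share by more than `tolerance` (absolute difference)."""
    observed = df[attribute].value_counts(normalize=True)
    rows = []
    for group, expected in population_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "share_in_data": round(share, 3),
            "share_in_population": expected,
            "flagged": abs(share - expected) > tolerance,
        })
    return pd.DataFrame(rows)

# Example with made-up numbers: a training set whose gender balance is
# checked against hypothetical census shares.
train = pd.DataFrame({"gender": ["F"] * 300 + ["M"] * 700})
print(representativeness_report(train, "gender", {"F": 0.51, "M": 0.49}))
```

A report like this is only a starting point; documenting how the data was collected and keeping a human reviewer in the loop are what the high-risk provisions ultimately call for.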
In short, relatively new technologies like AI need regulation, but regulation that is too strong risks hampering innovation. The regulation proposed by the European Commission (EC) strikes this balance by focusing on “algorithmic bias” in AI, and it also addresses equal access to fundamental societal services such as education, justice, and healthcare. Sound AI regulation presupposes an unbiased AI training process, and going forward, trained data scientists will need to provide close oversight of AI algorithms.