From now on, it is just a question of development speed. We are going one step further: the first machine-learning models for the targeted evaluation of human brain waves, and thus human thoughts, already exist. We think of the television and switch between programs using only our thoughts. A short time ago, an MIT student succeeded in building a so-called silent interface to the brain: sensors read the brain waves that arise when words are merely thought (similar to an EEG), machine learning assigns words to the patterns it finds, and a downstream process executes the recognized commands.
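The pipeline described above (sensor readings, pattern matching, command execution) can be sketched in miniature. The centroids, feature vectors, and commands below are entirely hypothetical stand-ins, and a nearest-centroid classifier substitutes for whatever model the MIT project actually used:

```python
import math

# Hypothetical sketch of the described pipeline: sensor readings are reduced
# to feature vectors, a nearest-centroid classifier (standing in for the real
# machine-learning model) maps each pattern to a word, and the word's command
# is looked up for execution. All numbers are invented for illustration.

# Assumed training result: the average feature pattern per silently "spoken" word.
CENTROIDS = {
    "next":     [0.9, 0.1, 0.2],
    "previous": [0.1, 0.8, 0.3],
    "mute":     [0.2, 0.2, 0.9],
}

# Hypothetical mapping from recognized word to television command.
COMMANDS = {"next": "channel +1", "previous": "channel -1", "mute": "volume 0"}

def classify(pattern):
    # Choose the word whose centroid is closest in Euclidean distance.
    return min(CENTROIDS, key=lambda w: math.dist(CENTROIDS[w], pattern))

reading = [0.85, 0.15, 0.25]        # one simulated sensor feature vector
word = classify(reading)
print(word, "->", COMMANDS[word])   # next -> channel +1
```

The point of the sketch is only the structure: whoever controls the mapping from patterns to commands controls what the system does with a thought.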
What if there were sensors that could track us from a distance? Not only would we lose our privacy, we would be defenseless against any kind of manipulation from outside.
Security concepts or the complete absence thereof!
In all important areas of our lives, we have always used access-control concepts to protect against misuse. Software, too, is written in a special way to protect it from so-called exploits. There are updates and fixes, all with the sole aim of securing an application against targeted misuse.
To date, there is not a single such concept in the field of machine learning!
The need to think about security concepts only recently became clear, after scientists from the Google Brain team deceived a trained object-recognition model by deliberately feeding it manipulated inputs and causing it to make a false statement. This example makes it very clear how young the field of AI still is.
However, since this rapid development is accompanied by strong commercialization, improvements and conceptual groundwork are urgently needed.
It is not unreasonable to ask whether, and which, security concepts we will need in the future to protect our personality, namely our personality in the form of our thoughts, combined with direct access to it. Perhaps such protection could be an extension of the familiar antivirus programs. The virtual and real worlds will merge completely, and this will create the need for a humanoid firewall. How else should we protect ourselves from unwanted access, manipulation, or influence?
One thing is clear: there are far more questions than answers. But even these questions are only those that can already be identified based on the assumed context. We do not yet know the questions of tomorrow, just as we cannot predict the state of development nine months from today. Perhaps machines will give us the answer; perhaps they will even have to give us the answer so that it is free of emotion and judgment.
Although great efforts are made to produce neutral results in machine learning, extremely disturbing incidents unfortunately occur again and again.
Here are some of many alarming examples (from the USA):
Software for assessing the likelihood of criminal recidivism is twice as likely to misjudge African-Americans as it is Caucasians.
Google Photos automatically categorized African-Americans as gorillas.
Google Translate renders a gender-neutral Turkish statement as "He is a doctor. She is a nurse."