Companies can provide cybersecurity training to their employees to make them aware of malicious messages that attempt to obtain personal information.
Companies should also deploy multi-factor authentication tools to verify user identities, along with cybersecurity software that protects against data theft.
Furthermore, they should stay informed about developments in generative AI and the fraud risks that come with it (e.g. deepfakes) and offer regular training on the topic.
The biggest ethical concerns are data privacy and over-reliance on AI
When asked about ethical concerns related to the use of generative AI tools, only 3% of study participants say they have none. Among the remaining AI users, the three biggest concerns are privacy and data security (50%), over-reliance on AI (47%), and job displacement (36%).
Organizations that rely solely on artificial intelligence to produce content need to be careful: such content may be perceived as impersonal, uncritical, and lacking in depth, and that criticism can damage brand image. Reputational harm is one of the risks companies face when using generative AI, a danger identified by 19% of respondents.
Information produced by generative AI tools often draws on unknown, unverified sources that may be copyrighted, which creates risks of plagiarism and misinformation. It is therefore essential to check outputs for errors or inconsistencies before using them. These dangers should be considered not only by employees but also by company management, for example by setting clear rules on how generative AI may be used.
It is therefore crucial for companies to establish practices for the correct use of generative AI tools and ensure compliance with applicable laws. Management should provide a list of approved generative AI tools to guide employees in their approach and ensure that they use solutions that are compatible with the organization's needs. It is also important to train employees so that they can master the use of these tools and learn more about data confidentiality and ethical issues related to artificial intelligence.
Employees fear job losses
One can imagine that the automation generative AI brings could cause certain jobs to disappear over time, a potential consequence that most of the workers surveyed are aware of. 36% of respondents named job displacement as one of the biggest ethical concerns related to this technology, and 39% fear they could lose their job to AI within the next five years.
56% of respondents believe that generative AI could replace 11-30% of their work today. It is noteworthy, however, that few respondents put this figure above 50%. Could this be evidence that employees do not yet see generative AI as an immediate threat, but rather as a kind of right hand that helps them get part of their work done? Given the technology's current capabilities, it is safe to assume that it will change the way we work, letting us spend less time on menial, repetitive tasks and more on complex ones.
For companies, automating certain tasks through generative AI could mean, among other things:
cost savings, as time-consuming and resource-intensive processes are eliminated,
greater inventiveness, as employees have more time for creative and innovative tasks,
a greater competitive advantage over organizations in the same industry that have not introduced generative AI into their working methods.
AI in companies: A technology with many advantages that needs a clear framework
There is no doubt that jobs will evolve with generative AI tools, and this shift can be seen as a pivotal moment in history. Using generative AI in the workplace has the potential to boost employee performance: companies can re-evaluate tasks and roles so that employees spend less time on work that can be automated and more time on complex work. 42% of employees say that generative AI gives them more time to focus on higher-value tasks.
While companies are still researching and testing generative AI tools, a company policy can serve as a guide, outlining best practices and the potential consequences of misuse. Nine in ten respondents believe there should be policies regulating generative AI tools in their workplace: 25% favor at least some form of policy, while 68% believe strict policies should be implemented.