As workplaces tighten regulations on how employees use platforms like ChatGPT and Google Gemini, thousands of Australian workers may be risking their jobs by using AI tools without being aware of their employer’s policy.
According to a survey, only roughly one in three employees who use AI at work are aware of their employer’s AI use policy and what it permits.
According to the survey, nearly 20% of employed respondents use AI on a daily basis, while over 60% said they never use it at work.
“Breaking an AI policy would probably be treated like any other workplace policy breach,” said Giuseppe Carabetta, an associate professor of workplace and business law at the University of Technology Sydney.
The greater risk is not using AI itself, but using it in ways that reveal private or sensitive information, damage a business’s reputation or produce risky or unfair outcomes.
Data mistakes can escalate fast
Employers are cautious because of recent incidents. In one case, a former contractor uploaded an Excel file containing over 12,000 rows of data to ChatGPT, and up to 3,000 people’s health and personal data might have been compromised as a result.
Australia’s data protection regulator has also noted that once information about people is fed into publicly accessible generative AI tools, it can be difficult to trace, control or delete.
This, it noted, could result in risks ranging from data breaches to harmful decision-making.
For workers, the advice is simple: check what your organization permits, use only approved tools, and avoid pasting private or sensitive information into public AI tools if you are not sure.