2 Comments
Anshul Khare:

An indirect privacy risk that companies are worried about is employees running AI-generated code without being aware that a malicious library is siphoning sensitive information off the employee's system. This risk existed before too, but guardrails were in place, e.g. vetting which libraries are allowed to be used. Now the attack surface has increased, because even non-technical staff can run AI-generated code.

Navin Kabra:

This is a good point that I should have covered in my article. In general, use of AI increases the chances of malicious code running, either because the AI ended up using a bad library, or because of Simon Willison's "lethal trifecta" (an agent that combines access to private data, exposure to untrusted content, and the ability to communicate externally): https://x.com/simonw/status/1934602159984984235

I do believe that the right approach to this is to strengthen the guardrails, because (1) research shows that employees use LLMs for work whether or not it is allowed, and (2) as you pointed out, this can happen even without LLMs, so strong security processes are a good idea anyway. A sketch of one such guardrail is below.
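As a concrete illustration, here is a minimal sketch of the library-vetting guardrail mentioned above: a script that flags any dependency in a requirements file that is not on a company-approved allowlist. The allowlist contents and the file path are hypothetical examples, not a prescription.

# Minimal sketch of a library-vetting guardrail: flag any dependency in a
# requirements file that is not on a company-approved allowlist.
# APPROVED_PACKAGES and the default path are hypothetical examples.

APPROVED_PACKAGES = {"requests", "numpy", "pandas"}  # hypothetical allowlist

def check_requirements(path: str = "requirements.txt") -> list[str]:
    """Return the names of packages in `path` that are not approved."""
    flagged = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Take the package name before any environment marker or
            # version specifier, e.g. "requests>=2.0" -> "requests".
            name = line.split(";")[0]
            for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
                name = name.split(sep)[0]
            name = name.strip().lower()
            if name and name not in APPROVED_PACKAGES:
                flagged.append(name)
    return flagged

if __name__ == "__main__":
    bad = check_requirements()
    if bad:
        print("Not on the approved list:", ", ".join(bad))
    else:
        print("All dependencies are approved.")

In practice a check like this would run in CI or a pre-commit hook, so that AI-generated code cannot pull in unvetted dependencies unnoticed.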
