A recent investigation has revealed that hundreds of open-source large language model (LLM) builder servers and dozens of vector databases are leaking highly sensitive information to the open web. As companies increasingly integrate AI into their business operations, they sometimes neglect to secure these powerful tools and the sensitive data they handle.
Legit Security researcher Naphtali Deutsch conducted a web scan that uncovered significant vulnerabilities in two types of open-source AI services: vector databases, which store data for AI tools, and LLM application builders, specifically the open-source program Flowise. The scan turned up a troubling amount of sensitive personal and corporate data that organizations had inadvertently made public as they hurried to join the generative AI wave.
Deutsch highlighted the risk, noting that many programmers are quick to adopt these tools without considering the necessary security measures. “A lot of programmers see these tools on the internet, then try to set them up in their environment,” Deutsch explained, “but those same programmers are leaving security considerations behind.”
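The exposures described above typically come down to services that answer API requests without demanding credentials. Below is a minimal, hypothetical sketch of how a defender might triage such probe results; the hostnames, endpoint behavior, and status-code mapping are illustrative assumptions, not details from the investigation, and no real network traffic is sent.

```python
# Illustrative sketch: classifying unauthenticated-probe results to spot
# publicly exposed AI services. All hosts and responses are simulated.

def classify_probe(status_code: int) -> str:
    """Map the HTTP status of an unauthenticated probe to a risk label."""
    if status_code == 200:
        return "exposed"        # data returned with no credentials supplied
    if status_code in (401, 403):
        return "auth required"  # server demanded credentials, as it should
    return "inconclusive"       # redirects, errors, etc. need manual review

# Simulated (host, HTTP status) pairs standing in for real scan output.
results = [
    ("vector-db.example.internal", 200),
    ("flowise.example.internal", 401),
    ("llm-builder.example.internal", 302),
]

for host, status in results:
    print(f"{host}: {classify_probe(status)}")
```

A triage step like this only flags candidates; confirming an actual data leak still requires manually inspecting what the exposed endpoint returns.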