Last week’s DEF CON in Las Vegas saw thousands of highly skilled IT engineers gather for one cause – to pave the way for a more inclusive and trustworthy AI environment.

NVIDIA at the Generative Red Team Challenge

Hosted by AI Village, SeedAI, and Humane Intelligence, the Generative Red Team Challenge drew participation from NVIDIA itself, including its experts and leaders. Several workshops and training sessions were held to study and demonstrate the issues with the current generation of AI technology – and the exploits are plentiful.

The combined effort between hacker communities and enterprises pushes for AI safety to receive attention that matches the pace of technological advancement in cutting-edge generative models.

The hacker community, in turn, brings skepticism, independence, and transparency to the field-testing of emerging security standards.

Specifically, models created by Anthropic, Cohere, Google, Hugging Face, Meta, Microsoft, NVIDIA, OpenAI, and Stability AI will all be tested on an evaluation platform developed by Scale AI.

As expected, Team Green’s NeMo Guardrails toolkit also made an appearance, helping developers guide and tune generative AI within a specific set of boundaries for more coherent, straightforward, and on-topic output.
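To give a rough idea of how that works, NeMo Guardrails lets developers describe conversational boundaries in a configuration language called Colang. The sketch below is a minimal, hypothetical example – the topic and the exact phrasings are invented for illustration – of steering a bot away from off-topic questions:

```
# Hypothetical Colang sketch: keep the bot on a single topic.
define user ask off topic
  "What do you think about politics?"

define bot refuse off topic
  "Sorry, I can only help with questions about our products."

define flow off topic
  user ask off topic
  bot refuse off topic
```

When a user message matches the off-topic pattern, the flow routes the conversation to the canned refusal instead of letting the model answer freely.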

Aside from DEF CON, NVIDIA also joined last week’s Black Hat conference, where it provided a two-day training session on machine learning, a demo of web-scale training dataset poisoning, and panel discussions for sharing opinions.

Organizers of the event are also planning to analyze the findings and publish them publicly – all for the sake of accelerating AI development while covering as much ground as possible.
