Speaking Engagement

Manish Agnihotri discusses security for LLM applications at ILTA Evolve

May 3, 2024

Written by

Manish Agnihotri

At ILTA Evolve, Coheso’s COO, Manish Agnihotri, contributed to the panel titled "Safeguarding Legal Tech: Navigating Security Challenges in LLM Applications".

The panel opened with a discussion of how executives and legal leadership are responding to advances in generative AI and its impact on technology adoption. It then explored security considerations for LLM-based legal tech applications in depth, covering topics such as prompt injection, input and output validation, fine-tuning LLMs with proprietary data, and the evolving threat landscape that comes with using multiple LLMs.

Agnihotri commented on the complexities and potential risks of fine-tuning AI models for legal tasks, advising careful consideration before proceeding. He also warned that LLMs expand the security risk surface: manipulation of model input and output can lead to vulnerabilities such as privilege escalation and remote code execution. His comments were covered in the articles below.

Related Articles

Stay connected with Coheso’s latest advancements.

Subscribe to the Coheso Newsletter