Lauren Nignon
Employer
Categories: Information technology, Marketing strategy, Market research, Media


Recent projects

Feminine Intelligence Agency
New York, New York, United States

Policy Strategy for Combating AI-Assisted Coercive Control

The Feminine Intelligence Agency (FIA) is a research and public policy organization dedicated to advancing Social Discernment—the ability to detect manipulation early, respond effectively, and protect personal and collective agency in complex social environments. FIA is developing a comprehensive curriculum, scalable tools, and legal frameworks to address coercive control, emotional exploitation, and tech-enabled manipulation in relationships and digital life. Our work focuses on:

🧠 Social Discernment Curriculum: A new field of emotional intelligence that helps people recognize coercion, power games, and deceptive language before harm occurs.

📱 Player Identifier Chatbot: A predictive tool that analyzes early relationship patterns to help users recognize emotionally dangerous individuals.

🔍 AI Language Training App (ChatBoy): An app that teaches users to spot manipulative language across different contexts—dating, work, friendship, and family.

🧘‍♀️ Peer-to-Peer Wellness Platform: A non-therapeutic, group-based support system designed for women navigating emotional burnout and frustration in high-conflict or confusing relationships.

🔐 Cybersecurity for Women: A decentralized, peer-led training program that focuses on emotional safety, social engineering, and digital resilience—especially for those targeted by psychological abuse or coercion online.

FIA's work bridges public policy, digital safety, and emotional intelligence to help individuals and institutions detect invisible threats and respond with clarity, ethics, and courage.

This project explores how the United States might adopt policies to criminalize coercive control in intimate and institutional relationships, following the lead of jurisdictions such as England and Wales, Scotland, and Ireland.
Specifically, it will examine how emerging technologies—particularly generative AI—are amplifying patterns of manipulation, isolation, and psychological domination, a phenomenon we refer to as AI-Assisted Psychological Exploitation (AIPEx). The student will assess:

- How Britain successfully passed coercive control laws (strategy, framing, resistance)
- Why similar efforts have struggled in the U.S. (cultural, legal, and political factors)
- How new threats—especially tech-enabled coercion—may change the narrative or legislative appetite

The outcome will be a policy roadmap for how coercive control could be reframed, regulated, or outlawed in the U.S., including legal frameworks, political feasibility, and public engagement strategies.

📘 Policy Problem: The U.S. lacks legal frameworks to address non-physical forms of abuse such as gaslighting, digital surveillance, emotional manipulation, and isolation tactics—despite their well-documented psychological and economic impacts. The problem is compounded by AI tools that are being misused to amplify these tactics at scale (e.g., deepfakes, real-time surveillance, chatbot mirroring, voice cloning). This project addresses both a market failure (unregulated tools enabling harm) and a government failure (lack of legal protections and prevention strategies for psychological abuse).

🧩 Key Questions to Explore:

- What legislative, advocacy, and cultural strategies made coercive control laws pass in the UK and Ireland?
- What would a U.S.-specific policy roadmap need to look like—legally, politically, and culturally?
- Could technology-driven manipulation (AIPEx) be a wedge issue for reform or regulation?
- What risks or resistance would this proposal face from political, tech, or legal sectors?

Matches: 1 · Category: Gender studies + 2 · Status: Open
Feminine Intelligence Agency
New York, New York, United States

Legal Pre-Exit Toolkit Development for Women in Coercive Control Situations

The Feminine Intelligence Agency (FIA) is a research and innovation lab focused on uncovering, naming, and dismantling hidden forms of social coercion. Our work spans psychology, AI, public health, and human rights—but at the center of it all is one urgent truth: coercive control is still legal in most of the world, and even in jurisdictions where laws exist, it remains nearly impossible to prosecute.

Coercive control is a form of invisible violence—the manipulation, isolation, gaslighting, financial entrapment, and slow erosion of autonomy that occurs without physical bruises. Because it operates through covert aggression, it leaves women without the kind of hard evidence courts typically demand. Most survivors are told there's "nothing they can do," or worse—that it's their fault. Even in the best-case scenario, these women are expected to gather and present their own evidence, interpret complex laws, and advocate for themselves—all while under psychological siege and often without money, legal support, or safety.

That's where this project comes in. We're inviting your team to design a step-by-step legal preparation toolkit for women experiencing coercive control who may not be able to leave immediately—but who want to act intelligently and strategically now. The goal is to equip them with a concrete, discreet path forward: what to document, how to protect themselves legally and digitally, and how to begin building a trauma-informed case that courts might one day take seriously.

Your research and insight—especially if it includes interviews with real attorneys and advocates—could help shape a resource that doesn't exist anywhere else. With your help, we can begin translating constitutional protections and human rights principles into something women can actually use. This is not just a research exercise. It is an opportunity to close a devastating legal blind spot—and make a real difference in the lives of women navigating invisible forms of abuse with no roadmap, no allies, and no precedent.

Matches: 2 · Category: Gender studies + 4 · Status: Open
Feminine Intelligence Agency
New York, New York, United States

Cybersecurity Fair Pilot Readiness Assessment

The goal of this project is to help the Feminine Intelligence Agency (FIA) prepare its innovative Cybersecurity Fair platform for public launch. The Fair is an interactive, peer-learning environment where students teach each other practical cybersecurity micro-skills—a scalable, low-cost model to raise digital awareness and resilience across campuses. Student teams will focus on testing, improvement, and pilot design to ensure the platform is secure, effective, and engaging. Through hands-on evaluation and structured reporting, the team will connect cybersecurity principles with real-world educational deployment.

Key Objectives:

- Conduct a cybersecurity and usability audit of the Cybersecurity Fair prototype, identifying vulnerabilities, privacy risks, and accessibility issues.
- Design and document a pilot launch plan, including test procedures, feedback collection, and key performance indicators (KPIs).
- Evaluate learning effectiveness—does the content help users understand emerging threats like AI-assisted psychological exploitation (AIPEx) and social engineering?
- Deliver an implementation readiness report summarizing system health, educational quality, and risk mitigation priorities.
- Develop a professional presentation package suitable for executive review and potential university partners.

By the end of the project, students will produce a set of practical, adoption-ready recommendations that strengthen FIA's ability to deliver cybersecurity literacy to thousands of college students, especially women, in a fast-changing digital threat landscape.

Matches: 0 · Category: Security (cybersecurity and IT security) · Status: Open
Feminine Intelligence Agency
New York, New York, United States

Global Student Program Scalability Model

The Feminine Intelligence Agency (FIA) is a global research and innovation lab dedicated to advancing social discernment, digital safety, and emotional intelligence through AI-driven tools and peer-to-peer learning systems. FIA collaborates with universities, researchers, and students worldwide to tackle complex social and ethical challenges at the intersection of technology, psychology, and social systems.

FIA seeks to enhance its global research and learning ecosystem by developing a scalable student infrastructure model. The current framework brings together students from diverse disciplines—including data science, psychology, design, and technology—to co-develop tools that address pressing social issues such as AI ethics, manipulation detection, and digital safety. This project invites MBA students to analyze the existing FIA student infrastructure and propose a scalable global model that ensures sustainability, operational efficiency, and cross-institutional collaboration. The final deliverable will serve as a blueprint for future FIA academic partnerships and global student programs.

Objectives:

- Assess the current FIA student collaboration model, including onboarding, task management, and deliverable tracking.
- Identify bottlenecks in communication, project continuity, and knowledge transfer between cohorts and partner institutions.
- Design a replicable, modular framework that enables seamless onboarding, collaboration, and research production across global teams.
- Recommend digital infrastructure and governance mechanisms (e.g., Notion, Slack, or LMS systems) to support scalability and institutional memory.
- Deliver a final proposal that outlines strategic, operational, and implementation plans for a sustainable FIA global education pipeline.

Matches: 0 · Category: Organizational structure + 3 · Status: Open