Critical Vulnerabilities Expose Windows Hello Authentication on Popular Laptops: Research
Plus, Indie AI Tools: The Unchecked Frontier in Enterprise Security Threats
Blackwing Intelligence Uncovers Exploitable Weaknesses in Fingerprint Sensors of Dell, Lenovo, and Microsoft Devices
New research from Blackwing Intelligence has uncovered a series of vulnerabilities posing a significant risk to Windows Hello authentication on Dell Inspiron 15, Lenovo ThinkPad T14, and Microsoft Surface Pro X laptops. The flaws, identified in fingerprint sensors manufactured by Goodix, Synaptics, and ELAN, could allow attackers to bypass the authentication process entirely.
Researchers Jesse D'Aguanno and Timo Teräs discovered weaknesses in the "match on chip" (MoC) fingerprint sensors, which integrate biometric matching and management functions directly into the sensor's integrated circuit. While MoC prevents stored fingerprint data from being replayed to the host, it does not, on its own, stop a malicious sensor from falsely claiming that a legitimate user has authenticated, or from replaying previously recorded host-sensor traffic.
The vulnerabilities in the ELAN, Synaptics, and Goodix sensors open avenues for adversary-in-the-middle (AitM) attacks and for bypassing Secure Device Connection Protocol (SDCP) protections. ELAN's lack of SDCP support allows any USB device to mimic the fingerprint sensor and assert that an authorized user is authenticating.
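The defense SDCP is meant to provide can be illustrated with a simplified challenge-response sketch. To be clear, this is a generic illustration of the pattern, not Microsoft's actual protocol: the host issues a fresh random nonce with every authentication attempt, and the sensor must bind its match result to that nonce with a keyed MAC, so a recorded response cannot be replayed later.

```python
import hashlib
import hmac
import secrets

# Shared secret from a hypothetical secure pairing step. In real SDCP
# this is established via a key-agreement handshake with the sensor.
PAIRING_KEY = secrets.token_bytes(32)

def sensor_respond(nonce: bytes, matched: bool, key: bytes = PAIRING_KEY) -> tuple[bytes, bytes]:
    """Sensor authenticates its match result together with the host's nonce."""
    result = b"match" if matched else b"no-match"
    mac = hmac.new(key, nonce + result, hashlib.sha256).digest()
    return result, mac

def host_verify(nonce: bytes, result: bytes, mac: bytes, key: bytes = PAIRING_KEY) -> bool:
    """Host accepts only a response bound to the nonce it just issued."""
    expected = hmac.new(key, nonce + result, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

# Fresh nonce per authentication attempt.
nonce = secrets.token_bytes(16)
result, mac = sensor_respond(nonce, matched=True)
assert host_verify(nonce, result, mac)

# A captured response fails against a later attempt, because the host
# issues a new nonce each time; this is what defeats replay.
later_nonce = secrets.token_bytes(16)
assert not host_verify(later_nonce, result, mac)
```

A sensor with no such binding (ELAN) or with the binding disabled by default (Synaptics, per the research) leaves the host trusting whatever result arrives over USB.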
Synaptics' implementation ships with SDCP turned off by default and relies on a flawed custom Transport Layer Security (TLS) stack to protect USB communication, allowing biometric authentication to be circumvented.
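The research does not publish Synaptics' TLS configuration, but the general anti-pattern behind an insecure TLS stack, encryption without endpoint authentication, can be shown in a few lines of Python. A client configured this way will happily complete a handshake with any endpoint, including an attacker in the middle:

```python
import ssl

# Hardened default: the server certificate and hostname are validated.
secure_ctx = ssl.create_default_context()
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED

# Anti-pattern: encryption without authentication. Any endpoint that can
# complete the handshake is trusted, enabling adversary-in-the-middle.
# (check_hostname must be disabled before verify_mode can be relaxed.)
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False
insecure_ctx.verify_mode = ssl.CERT_NONE
assert insecure_ctx.verify_mode == ssl.CERT_NONE
```

The traffic is still encrypted in both cases; the difference is that only the first configuration guarantees who is on the other end, which is the property an AitM attack exploits.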
Exploiting Goodix's sensor involves taking advantage of differences in enrollment operations between Windows and Linux systems, leveraging cleartext USB communication and unauthenticated configuration packets to bypass authentication.
To address these vulnerabilities, the researchers recommend that OEMs enable SDCP and have fingerprint sensor implementations audited by independent, qualified experts. Even with Microsoft's work on SDCP, device manufacturers' misinterpretation of its objectives and the protocol's limited coverage of device operations leave a substantial attack surface exposed.
This revelation echoes previous instances where Windows Hello biometric authentication was compromised, underscoring the need for continuous improvements and meticulous scrutiny of security implementations in biometric authentication systems.
As Employee Demand Soars, CISOs Wrestle with Risks Posed by Unsuspected AI Adoptions
The rapid adoption of AI tools by employees outside conventional review procedures is becoming a significant challenge for CISOs and cybersecurity teams, mirroring the historical dilemma posed by shadow IT in the SaaS landscape. With AI, the surge in employee-driven demand for tools, exemplified by ChatGPT's swift ascent to 100 million users, intensifies the pressure on security teams to accommodate this trend.
While studies highlight a potential 40% boost in productivity through generative AI, the urgency to fast-track AI adoption without proper scrutiny is mounting. However, succumbing to these demands introduces serious risks of SaaS data leakage and breaches, especially with employees gravitating towards AI tools developed by small entities and indie developers.
Indie AI startups, boasting tens of thousands of apps, entice users with freemium models and product-led growth strategies but typically lack the stringent security measures inherent in enterprise-grade solutions. Offensive security engineer and AI researcher Joseph Thacker outlines the risks associated with these indie AI tools:
Data Leakage: Generative AI tools have broad access to user inputs, leading to potential data exposure and leaks, as seen in the case of leaked ChatGPT chat histories.
Content Quality Issues: Large language models (LLMs) can generate inaccurate or nonsensical outputs (termed hallucinations), raising concerns about misinformation and ethical considerations.
Product Vulnerabilities: Smaller organizations behind indie AI tools often overlook addressing common product vulnerabilities, making them more susceptible to various attack vectors.
Compliance Risk: Tools that fall short of established data privacy laws and regulations, or of frameworks such as SOC 2, could expose the organizations using them to hefty penalties.
Connecting indie AI tools to enterprise SaaS apps elevates productivity but significantly amplifies the risk of backdoor attacks. AI-to-SaaS connections, facilitated by OAuth access tokens, inherit lax security standards of indie AI tools, creating potential entry points for threat actors targeting sensitive data within organizational SaaS systems.
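Why an OAuth grant inherits the weakest link's security posture can be sketched with a minimal mock. All names and scopes here are hypothetical, and the "SaaS API" is a stand-in dictionary: the point is that a bearer token checks only the token, not who presents it, so a token stored by a breached AI tool works just as well for the attacker.

```python
import secrets

# Stand-in for sensitive documents held in an enterprise SaaS app.
SAAS_DOCS = {"q3-board-deck": "confidential revenue figures"}
TOKENS: dict[str, set[str]] = {}  # issued token -> granted scopes

def grant(scopes: set[str]) -> str:
    """User approves an OAuth consent screen; the AI tool stores this token."""
    token = secrets.token_urlsafe(16)
    TOKENS[token] = scopes
    return token

def saas_read(token: str, doc: str) -> str:
    """The SaaS API validates the token and scope, not the caller's identity."""
    if "docs.read" not in TOKENS.get(token, set()):
        raise PermissionError("missing scope: docs.read")
    return SAAS_DOCS[doc]

# An employee connects an indie AI tool with a broad read scope.
ai_tool_token = grant({"docs.read"})

# If the tool is compromised, the attacker simply replays the same token.
stolen_token = ai_tool_token
assert saas_read(stolen_token, "q3-board-deck") == "confidential revenue figures"
```

This is why vendor assessment and scope minimization matter: the SaaS platform cannot distinguish the indie tool from whoever has stolen its token.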
To mitigate these risks, CISOs and cybersecurity teams should focus on fundamental strategies:
Standard Due Diligence: Understand and review AI tool terms thoroughly.
Application and Data Policies: Establish clear guidelines on allowed AI tools and data usage.
Employee Training: Educate employees on risks and policy adherence.
Vendor Assessments: Scrutinize security measures and compliance of indie AI vendors.
Communication and Accessibility: Establish open dialogue and clear guidelines for AI tool usage.
Creating an environment where security is seen as a business enabler rather than a barrier is crucial for long-term SaaS and AI security. Aligning cybersecurity goals with business objectives fosters cooperation and compliance, reducing the chances of unauthorized AI tool adoptions that jeopardize SaaS security.
Jobs/Internships
Iterable - Senior Software Engineer, Backend (Ecosystems) - Fully Remote
Coupang - Staff, Backend Engineer (FTS-Data Science) - Seoul, South Korea · On-site
Ripple - Staff Software Engineer, Finance Engineering - Toronto, Canada · On-site
HashiCorp - Software Engineering Intern - Hybrid
Datto - Junior Software Engineer Intern - Sydney Central Business District, New South Wales, Australia · On-site
Thermo Fisher Scientific - Software Engineering Intern - Pleasanton, CA
Ubisoft - Machine Learning Engineer Assistant – Internship (6-month) February/March 2024 (F/H/NB) - Paris, Île-de-France, France