
TheRoad

Product Strategy. Hands-on Consulting.

Smart AND Secured? Think Again.

  • Writer: Yoel Frischoff
  • 12 min read

A safe combination lock
Have the code? Are you the only one?

Smart tangibles offer enhanced utility, but also heightened security, privacy, and safety challenges.

How smart tangibles cut both ways is not yet fully understood, even though this question sits at the core of differentiation for companies like Apple and is, conversely, weaponized - rightfully or not - by governments in their trade wars.


What Seems to be the Trouble?


A brief introduction: The OSI Model

The OSI (Open Systems Interconnection) model is a conceptual framework that standardizes the functions of a telecommunication or computing system into several abstract layers, where each serves specific tasks and communicates with its adjacent layers, enabling interoperable network communication across heterogeneous systems.


ISO 35.100 OSI model diagram
OSI Model (Adapted)

Developed in the late 1970s, this model suited client-server computer networks well, and it carried over naturally to the internet era of widespread personal computers connecting to remote servers, and later to cloud services.


Cybersecurity and the OSI Model


The layered structure of the OSI model also serves as a useful framework for understanding cybersecurity risks.


Each layer - from physical hardware to application logic - can be a vector for attack, and mitigation strategies are often layered accordingly, from firewalling at the network level to authentication and encryption at the application layer.


Common security threats
Common security threats. Source: Infosectrain


OSI Model Adaptation to IoT


The Internet of Things (IoT) category of products introduces added complexity, as diversified hardware, connectivity, and interaction layers enter the picture. This is captured in a slightly different OSI model, showing how devices, network infrastructure, protocols, and visualization tools interact in a layered stack - from raw hardware at the bottom to user-facing applications at the top.

IoT adaptation of OSI model
IoT adapted OSI model, by Paul Refalo

Crucially, the IoT paradigm necessitates special attention to the hardware layer, as it consists of a vast variability of use cases, and to the application layer, as it is now split between the edge device and the web:

  • Inputs (keyboards and mice give way to other input devices)

  • Outputs (Screens vary wildly or are missing altogether)

  • Energy supply (the default AC mains give way to PoE, batteries, solar...)

  • Sensors and actuators (to some extent, the core purpose of this whole category)

  • Application (split between the device, the cloud, and web interface)


Expanded layers of OSI for smart tangibles
Expanded layers of OSI for smart tangibles

New Structure - New Threats


Looking at smart and IoT products through their version of the OSI structure, it becomes clear that unique vulnerabilities emerge in the many embodiments of the physical layer, as well as in the extension to the cloud and beyond.


Cyber attack vectors examples:


  • Physical Layer: Tampering, side-channel attacks, sensor spoofing, power supply manipulation (battery draining, over-voltage), environmental sabotage (e.g., heat or vibration).

  • Data Link: Eavesdropping on unencrypted RF signals (e.g., BLE, Zigbee), MAC spoofing, jamming, replay attacks over local wireless protocols.

  • Network Layer: Address spoofing (e.g., IPv6-related), insecure mesh routing, location tracking via IP leaks, exposure from gateway misconfiguration.

  • Transport Layer: Exploitation of lightweight protocols (e.g., CoAP, MQTT) with limited handshake/authentication, man-in-the-middle (MITM) attacks on UDP-based communication.

  • Session Layer: Session hijacking due to weak or absent session management in embedded systems, protocol downgrade attacks.

  • Presentation: Malformed data injection in minimal-format environments, encoding exploits in constrained parsing modules.

  • Application: Unauthorized remote control, firmware over-the-air (FOTA) hijacking, malicious update injection, insecure API endpoints, data leakage from sensor payloads.
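Several of the link-, transport-, and session-layer vectors above - replay attacks in particular - can be blunted by authenticating every frame and binding it to a monotonic counter. Here is a minimal sketch in Python (standard library only); the shared key, frame format, and counter policy are simplifying assumptions, not a production protocol:

```python
import hashlib
import hmac
import struct

SECRET = b"per-device-key"  # assumption: provisioned per device, ideally in a secure element

def make_message(counter: int, payload: bytes) -> bytes:
    """Frame layout: 8-byte big-endian counter | payload | 32-byte HMAC-SHA256 tag."""
    body = struct.pack(">Q", counter) + payload
    tag = hmac.new(SECRET, body, hashlib.sha256).digest()
    return body + tag

class Receiver:
    def __init__(self):
        self.last_counter = -1  # highest counter accepted so far

    def accept(self, msg: bytes):
        body, tag = msg[:-32], msg[-32:]
        # Constant-time comparison rejects forged or tampered frames
        if not hmac.compare_digest(tag, hmac.new(SECRET, body, hashlib.sha256).digest()):
            return None
        counter = struct.unpack(">Q", body[:8])[0]
        if counter <= self.last_counter:
            return None  # replayed (or stale) frame
        self.last_counter = counter
        return body[8:]

rx = Receiver()
msg = make_message(1, b"unlock")
assert rx.accept(msg) == b"unlock"  # fresh frame accepted
assert rx.accept(msg) is None       # exact replay rejected
```

Note that the counter only closes the replay window; full MITM resistance additionally requires encryption and proper key exchange (e.g., TLS or DTLS, where the device can afford it).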


Notorious Smart Products Cyber Attacks:


  1. Tesla Keyless Entry App Hijack (2016-2024)


    Tesla keyless phone app
    Tesla keyless phone app. Source: Taslem

What happened: Security researchers exploited flaws in Tesla’s keyless entry system via the app’s Bluetooth Low Energy (BLE) connection. Using a relay attack, they tricked the vehicle into thinking the authorized phone was nearby and gained unauthorized access.


  • Smart elements exploited: BLE-based proximity sensing; app-device pairing

  • Impact: Demonstrated how over-the-air convenience features can become physical vulnerabilities.


Several attack vectors were exploited and reported in world press:

  • The Guardian, Sep 2016: Team of hackers take remote control of Tesla Model S from 12 miles away

  • Auto Evolution, May 2024: Researchers Discover That Teslas Are Easy To Steal Despite Adopting New Keyless Tech

  • The Byte, Nov 2024: Teslas can be stolen by hijacking WiFi at charging stations, researchers find

  2. St. Jude Medical (Abbott) Pacemaker Vulnerabilities (2017)


Pacemaker by Abbott
Pacemaker by Abbott

What happened: Security researchers and later the FDA confirmed that pacemakers and defibrillators from St. Jude could be remotely hacked via their wireless telemetry.


  • Smart elements exploited: Wireless firmware updates and remote monitoring systems.

  • Impact: A recall and public awareness campaign; over 400,000 devices were involved.



  3. Jeep Cherokee Hack by Charlie Miller and Chris Valasek (2015)


Andy Greenberg/WIRED
Andy Greenberg/WIRED

What happened: Researchers remotely took control of a Jeep through its Uconnect infotainment system, affecting steering, brakes, and transmission.


  • Smart elements exploited: Cellular connectivity, software update access, and infotainment integration with vehicle control systems.

  • Impact: Fiat Chrysler recalled 1.4 million vehicles.


More about it:

  • Wired Jul 2015: Hackers Remotely Kill a Jeep on the Highway - With Me in It


  4. Mirai Botnet Attack on IoT Devices (2016)



screenshot of malware code
Screenshot / Tech Crunch

What happened: Malware infected hundreds of thousands of IoT devices (cameras, routers, DVRs) using default credentials, forming a massive botnet.


  • Smart elements exploited: Poorly secured, internet-facing devices with hardcoded or default credentials.

  • Impact: Massive DDoS attack on DNS provider Dyn, disrupting services like Twitter, Netflix, and Reddit. Hackers used the botnet to send a 620 Gbps DDoS to KrebsOnSecurity, a popular security blog by Brian Krebs.


More about it:

  • Tech Crunch Oct 2016: Hackers release source code for a powerful DDoS app called Mirai


  5. Ring Doorbell Camera Hacks (2019)


Ring doorbell camera
Ring doorbell camera / abc news

What happened: Multiple users reported attackers gaining access to Ring camera feeds and speaking through them. In some cases, attackers taunted children.


  • Smart elements exploited: Weak passwords, lack of two-factor authentication.

  • Impact: Public backlash; Ring added mandatory 2FA and improved security messaging.


More about it:

  • ABC News, Dec 2019: Ring security camera hacks see homeowners subjected to racial abuse, ransom demands


  6. Flaws in Smart Locks (Various, 2016–2020)




What happened: Several smart locks (e.g., Tapplock, August, and others) were shown to be easily defeated via Bluetooth sniffing, weak encryption, or physical bypass.


  • Smart elements exploited: BLE communication, cloud management interfaces, firmware updates.

  • Impact: Erosion of consumer trust in smart home security products.


More about it:

  • The Verge Jul 2018: This fingerprint-verified padlock is extremely easy to hack

  • BGR Aug 2016: Researchers find ‘smart’ door locks are easy to hack, surprising no one


  7. Nest Thermostat Ransom Attack (2019)



Image / Trend Micro
Image / Trend Micro

What happened: Some Nest users reported their thermostats being locked or set to extreme temperatures by hackers.


  • Smart elements exploited: Credential stuffing from previously breached usernames/passwords.

  • Impact: Google advised on stronger passwords and 2FA.


More about it:

  • Trend Micro Sep 2019: Hacker Compromised Family’s Wi-Fi, Taunted Family With Thermostat, Camera for 24 Hours


What's To Be Done?


For a product manager specializing in smart tangibles, Security (with a capital S) is one more system requirement to take into account, while keeping friction in the user experience minimal. While most aspects are baked into the hardware and backend levels, some are externalized to the users, especially at the authentication layer.


Here are some recommendations, though your mileage may vary:



  1. Designing for Resilience

  • Principle:

    Smart tangible products should assume hostile environments and users.


  • Practices:

    Threat modeling early in the product design; designing with fail-safes, redundant paths, and hardware-level integrity checks.


  • Examples:

    Physical anti-tamper features on higher-security devices provide visible evidence of enclosure tampering. These may include adhesive scratch-off tapes, labels, wax seals (just as in the Roman Empire), and single-use snap-off bands.





  2. Mandating Secure Authentication Defaults

    Mandating Secure Authentication Defaults
    Image / Palmetto Security Group

  • Principle:

    Smart devices must not rely on insecure, hardcoded, or shared default credentials. Authentication mechanisms should be resistant to common attack patterns, anticipating real-world user behavior and adversarial access attempts.


  • Practices:

    • Eliminate universal or default login credentials before shipping.

    • Require users to create strong, unique passwords during initial setup.

    • Enforce two-factor authentication (2FA) where remote access is available.

    • Use account lockout and rate-limiting to prevent brute-force attacks.

    • Restrict unauthenticated network access by default.

    • Secure local interfaces (e.g., Bluetooth, USB, debug ports) with user consent prompts or authentication.


These practices are aligned with FTC guidance, which stresses limiting unauthorized access by requiring authentication, limiting failed attempts, and logging authentication events to monitor for anomalies.
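The lockout and rate-limiting practice above can be sketched as a small sliding-window guard. This is an illustrative model, not a drop-in module; the thresholds and the in-memory state are assumptions (a real device would persist state and tune the limits):

```python
import time

class LoginGuard:
    """Lock an account after too many failed attempts inside a rolling window."""

    def __init__(self, max_failures=5, window_s=300, lockout_s=900):
        self.max_failures = max_failures
        self.window_s = window_s
        self.lockout_s = lockout_s
        self.failures = {}      # account -> list of failure timestamps
        self.locked_until = {}  # account -> lockout expiry time

    def attempt(self, account, password_ok, now=None):
        now = time.time() if now is None else now
        if self.locked_until.get(account, 0) > now:
            return "locked"
        # Keep only failures inside the rolling window
        fails = [t for t in self.failures.get(account, []) if now - t < self.window_s]
        if password_ok:
            self.failures[account] = []
            return "ok"
        fails.append(now)
        self.failures[account] = fails
        if len(fails) >= self.max_failures:
            self.locked_until[account] = now + self.lockout_s
            return "locked"
        return "denied"

g = LoginGuard(max_failures=3, window_s=60, lockout_s=600)
assert g.attempt("alice", False, now=0) == "denied"
assert g.attempt("alice", False, now=1) == "denied"
assert g.attempt("alice", False, now=2) == "locked"  # third failure trips the lockout
assert g.attempt("alice", True,  now=3) == "locked"  # correct password still refused while locked
```

Logging each failed attempt (per the FTC guidance) would be a one-line addition inside `attempt`, feeding whatever anomaly monitoring the backend runs.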


Examples:


  • Roku mandated 2FA in 2024 following a breach impacting over half a million accounts, enhancing login security across its ecosystem.

  • Ring added mandatory 2FA and better authentication UX only after significant public backlash in 2019, showing the pitfalls of reactive security design.

  • Nest (Google) experienced credential-stuffing attacks in 2019 due to password reuse, highlighting the need for built-in safeguards like breached credential detection.

  • Ezlo Smart Home proactively adopted multi-factor authentication as part of its onboarding and account setup flow, helping prevent unauthorized control of home devices.



  3. Security by Update


    Apple’s Secure Enclave
    Apple’s Secure Enclave / Apple
  • Principle:

    Devices must be designed to accommodate secure, ongoing updates to fix vulnerabilities discovered post-deployment.


  • Practices:

    • Implement secure update mechanisms using digitally signed firmware.

    • Prevent rollback to older, vulnerable firmware versions.

    • Notify users when updates are available and explain what changes are being made.

    • Design fail-safes to recover from interrupted or failed updates.
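A minimal sketch of the signed-update-with-anti-rollback idea, in Python. For brevity it uses an HMAC as a stand-in for a real asymmetric signature (production FOTA pipelines sign with a private key and verify with a public key burned into the device, e.g. Ed25519), and the package format is hypothetical:

```python
import hashlib
import hmac
import json

# Stand-in for the vendor's signing key. In production this is an asymmetric
# key pair: the private key stays with the vendor, and only the public half
# lives on the device for verification.
VENDOR_KEY = b"vendor-signing-key"

def sign_firmware(version: int, image: bytes) -> dict:
    """Produce a (hypothetical) signed update package."""
    meta = json.dumps({"version": version}).encode()
    tag = hmac.new(VENDOR_KEY, meta + image, hashlib.sha256).hexdigest()
    return {"version": version, "image": image, "tag": tag}

class Device:
    def __init__(self, installed_version=1):
        self.installed_version = installed_version

    def apply_update(self, pkg: dict) -> bool:
        meta = json.dumps({"version": pkg["version"]}).encode()
        expected = hmac.new(VENDOR_KEY, meta + pkg["image"], hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, pkg["tag"]):
            return False  # tampered or unsigned image
        if pkg["version"] <= self.installed_version:
            return False  # rollback to an older, possibly vulnerable build
        self.installed_version = pkg["version"]
        return True

d = Device(installed_version=2)
assert d.apply_update(sign_firmware(3, b"fw-v3")) is True    # newer, properly signed
assert d.apply_update(sign_firmware(2, b"fw-v2")) is False   # rollback refused
bad = sign_firmware(4, b"fw-v4")
bad["image"] = b"evil"
assert d.apply_update(bad) is False                          # signature no longer matches
```

Signing the version number together with the image matters: it stops an attacker from relabeling an old, legitimately signed image with a newer version number to slip past the rollback check.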


  • Examples:

    • Apple’s Secure Enclave ensures firmware updates are signed and authenticated before installation.

    • Google Nest devices use over-the-air (OTA) encrypted updates with user transparency.



  4. Standardization and Regulation

US Cyber Trust Mark
US Cyber Trust Mark

  • Principle:

    Regulatory frameworks help unify baseline security expectations and provide consumers with trust signals across products.

    Relying on established standards, manufacturers can accelerate innovation while drawing on best practices that also reduce legal risks.


  • Practices:

    • Align development practices with recognized IoT security standards.

    • Participate in voluntary labeling programs to signal compliance.

    • Design for transparency and auditability in regulated environments (e.g., healthcare, automotive).



  • Examples:

    • ETSI EN 303 645: A globally applicable standard for consumer IoT cybersecurity. It covers all consumer IoT devices while establishing a sound security baseline.

    • US Cyber Trust Mark: A voluntary FCC-led labeling initiative, announced in 2023, to help consumers identify compliant, secure IoT devices.

    • FDA guidance: Mandates secure design and update strategies for networked medical devices, such as insulin pumps or pacemakers.



  5. Privacy as Product Differentiator

Privacy and data security image
Image / IBM

  • Principle:

    Respecting user privacy by design can become a competitive advantage, not just a compliance checkbox.


  • Practices:

    • Minimize data collection to only what is essential for functionality.

    • Enable on-device processing where possible to reduce cloud dependency.

    • Provide clear user consent flows and data visibility controls.
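Data minimization can be enforced mechanically at the telemetry boundary: an explicit allow-list of fields that may leave the device. A tiny sketch, with hypothetical field names:

```python
# Only fields on the allow-list ever leave the device; everything else
# (behavioral data, nearby network identifiers, raw sensor dumps) stays local.
ALLOWED_FIELDS = {"device_id", "temperature_c", "battery_pct", "firmware_version"}

def minimize(telemetry: dict) -> dict:
    """Strip a telemetry payload down to the essential, allow-listed fields."""
    return {k: v for k, v in telemetry.items() if k in ALLOWED_FIELDS}

raw = {
    "device_id": "thermo-42",
    "temperature_c": 21.5,
    "occupants_detected": 3,      # behavioral data: not needed for the service
    "wifi_neighbors": ["aa:bb"],  # usable for location fingerprinting
}
assert minimize(raw) == {"device_id": "thermo-42", "temperature_c": 21.5}
```

An allow-list (rather than a block-list) is the safer default: a new sensor or debug field added later is private until someone consciously exports it.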


  • Examples:

    • Apple promotes on-device data handling (e.g., health metrics, Face ID processing) to reduce cloud exposure and position itself as privacy-centric - and is willing, for now, to pay the price in AI performance.

    • EU GDPR requires data minimization and user access to collected personal data - principles increasingly echoed globally.



  6. Third-Party Audits and Certifications


Trust Layer audits and certification
Audits and certification / Trust Layer
  • Principle:

    Independent security assessments validate vendor claims for security, safety, and privacy by identifying vulnerabilities that developers may overlook - before products reach mass production.


  • Practices:

    • Engage third-party security labs for penetration testing and protocol validation.

    • Launch vulnerability disclosure and bug bounty programs.

    • Use third-party compliance frameworks to demonstrate maturity.


  • Examples:

    • HackerOne and Bugcrowd power responsible disclosure and bug-bounty programs for companies like DJI, Fitbit, and General Motors.

    • SOC 2, ISO/IEC 27001, and Common Criteria certifications are increasingly applied to IoT platforms handling sensitive data.

    • Google and Microsoft routinely publish security audit results and threat modeling outcomes.



  7. Empowering Users

    Empowering users
    Empowering users / Wix Ai
  • Principle:

    Users are the first line of defense. They must be given clear information and tools to protect their own device and data.


  • Practices:

    • Provide educational prompts during onboarding about security and privacy settings.

    • Show clear device states (e.g., “camera is on” lights, permission icons).

    • Offer simple interfaces for permission management, device logs, and firmware updates.


  • Examples:

    • Ring added a security control center within its app after high-profile hacks to help users manage linked devices and logins.

    • iOS and Android show ongoing indicators (dots, status bars) when sensors like camera or microphone are in use.

    • TP-Link allows users to view and revoke cloud access via its mobile app’s “device status” dashboard.



  8. The Limits of Automation

Charlie Chaplin's Modern Times
Charlie Chaplin's Modern Times / Britannica

  • Principle:

    Automation can enhance usability but must not obscure control or security-related transparency.


  • Practices:

    • Allow users to override or disable automated decisions, especially when data is shared externally.

    • Avoid black-box machine learning that affects safety-critical functionality without explainability.

    • Require explicit user input for actions like unlocking doors or authorizing transactions.



  • Examples:

  • Tesla allows manual override of its Autopilot system and requires driver engagement for safety.

  • Smart thermostats like Ecobee allow manual control even when running AI-based energy optimization routines.

  • AI-driven door locks should include fallback PINs or key overrides to mitigate lockouts from false positives.
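The fallback-override idea can be sketched as follows - a hypothetical lock model in which automation alone never unlocks, and a locally stored PIN keeps working even when cloud or AI components fail:

```python
import hashlib
import hmac

class SmartLock:
    """Hypothetical lock model: automation may *suggest* an unlock, but the
    action requires explicit user confirmation, and a locally stored fallback
    PIN works even if cloud or AI components are unavailable."""

    def __init__(self, fallback_pin: str):
        # Store only a hash of the PIN, never the PIN itself
        self._pin_hash = hashlib.sha256(fallback_pin.encode()).hexdigest()
        self.locked = True

    def auto_unlock_request(self, confidence: float, user_confirmed: bool) -> bool:
        # Never act on the model's confidence alone, however high it is
        if confidence > 0.99 and user_confirmed:
            self.locked = False
        return not self.locked

    def pin_unlock(self, pin: str) -> bool:
        digest = hashlib.sha256(pin.encode()).hexdigest()
        if hmac.compare_digest(digest, self._pin_hash):
            self.locked = False
        return not self.locked

lock = SmartLock(fallback_pin="4921")  # illustrative PIN
assert lock.auto_unlock_request(confidence=0.999, user_confirmed=False) is False  # no consent, no unlock
assert lock.pin_unlock("4921") is True  # offline fallback still works
```

The design choice worth noting: the ML confidence gate and the user-consent gate are AND-ed, never OR-ed, so a false positive from the model can never open the door by itself.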



  9. Toward Ethical Smart Design



  • Principle:

    Security, privacy, safety, and user autonomy must be baked into the product development and operational lifecycle, not treated as afterthoughts.


  • Practices:

    • Cross-functional collaboration between product, security, legal, and ethics teams during ideation and testing.

    • Prioritize user agency: require consent, offer opt-outs, and make data use legible and granular.

    • Consider long-term social consequences of data collection, behavioral nudging, and opaque monetization models.



What's Not To Be Done?


Out in the Open

Governments, along with international governing bodies and standards organizations, declare and conduct policies to protect their citizens from cyber security threats. These actions may take the form of legislation, regulatory action, and international cooperation.


On the other hand, governments have been known to use regulation as a non-tariff barrier, impeding the entry or rapid expansion of foreign companies and protecting domestic manufacturing.


An outstanding example of this practice is the US ban on Huawei, the Chinese telecommunications giant:


In May 2019, the U.S. added Huawei to the Department of Commerce’s Entity List, barring American firms from doing business with it without a license. This followed the 2019 NDAA, which had already banned federal use of Huawei gear over national security concerns tied to the company’s links to the Chinese government.


The FCC later banned Huawei equipment sales, citing security risks. These actions disrupted Huawei’s global operations, cut revenues, and pushed it to develop alternatives to U.S. technology. U.S. pressure also led allied nations to reassess Huawei’s role in their critical infrastructure.


(You can read more about the impact on Huawei's business in the Wired article from June '17.)


While the official rationale emphasized national security, many analysts highlight protectionist motives. Huawei’s leadership in 5G challenged U.S. tech dominance, especially given the lack of a domestic telecom giant.


The ban aligned with broader efforts to decouple from Chinese supply chains and stimulate domestic tech investment. Outlets like The Economist (Here too), Brookings, and CSIS view the policy as both strategic and economically motivated.



More recently, in 2025, the UK government implemented measures restricting the presence of Chinese-made electric vehicles (EVs) at sensitive military sites.


Reports indicated that staff at facilities like RAF Wyton were instructed to park such vehicles at a distance from key buildings due to cybersecurity concerns. The Ministry of Defence’s directive was based on fears that embedded technology in these EVs could be exploited for espionage, potentially compromising sensitive data.


While not a blanket ban, this policy reflects the UK’s cautious approach to foreign technology in critical areas. The move aligns with broader efforts to safeguard national security amidst increasing integration of connected technologies in everyday assets.



In the Shadows


While governments publicly champion the protection of citizens and businesses from espionage, they frequently employ similar tactics themselves - typically under the banner of national security. Yet in some cases, these actions extend to surveilling political rivals, activists, and journalists, often without clear justification or judicial oversight, revealing a troubling use of power beyond legitimate defense.


Governments have repeatedly engaged in covert surveillance of their own citizens, targeting journalists, activists, and political opponents. In Hungary, Pegasus spyware was used to monitor investigative reporters and critics of the regime.


In Italy, intelligence officials were investigated for unlawfully surveilling judges and journalists tied to anti-corruption and mafia cases. In the U.S., the NSA’s mass surveillance programs, exposed by Edward Snowden, revealed widespread, warrantless data collection.


Germany’s BND also spied on domestic and foreign reporters. These cases show how democratic governments, citing security, often bypass oversight to suppress dissent.



What Can and Should Product Leaders Do About These Tendencies?


Product leaders occupy a critical junction between innovation, user experience, and responsibility. As smart products become vectors for surveillance - whether by governments, third parties, or internal misuse - leaders must take an active stance in defending user trust and civil liberties.


  1. Design with abuse scenarios in mind

Anticipate not only technical failure, but intentional misuse of features (e.g., always-on microphones, location sharing). Ask: What happens if this is used against the user?


  2. Push for transparency and control

Ensure clear communication around data collection, storage, and sharing. Give users genuine control - not just buried toggles or legalese.


  3. Advocate for secure, privacy-respecting defaults

Don’t wait for regulators. Enforce end-to-end encryption, disable unnecessary telemetry, and avoid dark patterns that coerce consent.


  4. Challenge questionable business requirements

Push back when leadership or clients demand functionality that compromises user agency. Align product integrity with long-term brand trust.


  5. Lead internally by example

Foster a culture where security, ethics, and user respect are non-negotiable. Collaborate with legal, privacy, and engineering teams early in the roadmap.


Are you, too, considering adding continuous value and upsell opportunities to your connected product?...





© 2024 TheRoad - All Rights Reserved 
