Deepfakes: Decoding and Defending An Introduction

In today’s digital age, information security is a top priority for organisations of all sizes. However, the rise of deepfakes has added a new layer of complexity to this issue. Deepfakes, the byproduct of cutting-edge artificial intelligence, have emerged as a potent threat, sparking concerns about misinformation, fraud, and reputational harm across various sectors.

Deepfakes are hyper-realistic forged media generated with powerful artificial-intelligence techniques. This post explores what deepfakes are, how they can impact an organisation, and best practices for identifying and defending against them.

Understanding Deepfakes: An Introduction

The term “deepfake” combines “deep learning” and “fake”. These forgeries are created by algorithms trained to mimic human behaviour, blurring the line between reality and fiction and leaving us questioning what is real and what is not.

Deepfakes utilize deep learning techniques to manipulate existing audio, video, or images or generate new forged media from scratch. The algorithms analyse source media content to learn how to mimic qualities like facial expressions, lip movements, voice, tone and inflections. This mimicked data is then leveraged to create realistic fakes depicting events or speech that never actually happened.

While deepfakes began with celebrity face-swapping videos, they now include dangerous impersonations of political leaders, executives, and employees.

The Rise and Potential Threats of Deepfakes

The potential uses of deepfakes are many, ranging from entertainment to politics. But as these fakes become increasingly realistic, they pose a significant threat to organisations of all kinds. According to a report released by the cloud-services firm VMware, deepfake attacks are on the rise [1].

“Cybercriminals are now incorporating deepfakes into their attack methods to evade security controls,” said Rick McElroy, principal cybersecurity strategist at VMware. “Two out of three respondents in our report saw malicious deepfakes used as part of an attack, a 13% increase from last year, with email as the top delivery method.”

As the accessibility of deepfake creation grows, it introduces several critical risks, some of which are highlighted below with reference to recent examples:

  • Social Engineering Fraud and Scams: Deepfakes can bypass security measures that rely on photos or videos for authentication. They can aid identity theft, impersonate executives to initiate unauthorised transactions, or manipulate financial information. For example, in 2019 criminals used a deepfake audio impersonation of a company executive to trick an employee into transferring $243,000 to a fraudulent account [2].
  • Disinformation campaigns: State-sponsored or malicious actors can leverage deepfakes to spread fake news, influence opinions, or interfere in political processes. For instance, in 2018 a deepfake video of Gabon’s president Ali Bongo, who was seriously ill at the time, was circulated to portray him as healthy and working, in an apparent effort to calm citizens and retain power [3].
  • Corporate Espionage: Sensitive internal communications or meetings with customers/partners can be forged to extract competitive intelligence. In 2020, a deepfake video call duped an energy company employee into handing over confidential data worth millions to a competitor [4].
  • Reputational damage: Realistic fake content can harm corporate or personal reputations and public trust. For example, in 2019 a deepfake video of Facebook CEO Mark Zuckerberg circulated online, falsely depicting him boasting about controlling users’ data [5].

The Line of Defence Against the Deepfake Onslaught

Organisations require a multilayered strategy of education, awareness, and vigilance to detect, respond to, and build resilience against deepfakes. The following recommendations outline broadly applicable safeguards. They are not comprehensive; additional industry- and organisation-specific measures should be considered when designing a robust system of controls against the diverse risks deepfakes pose:

  • Leverage AI Deepfake Detection Tools: Use technology to combat deepfakes. Several vendors are developing detection software that applies machine-learning algorithms to identify fake images and videos, helping spot forged content before it causes harm. Offerings such as Sentinel, Intel’s FakeCatcher, and Deeptrace (now Sensity) can analyse media and identify signs of manipulation.
  • Employee Training: First and foremost, train employees through awareness programmes to apply critical thinking and spot inconsistencies and suspicious activity. Deepfake videos may not perfectly sync lip movements with speech, and often lack natural eye movement and blinking. UC Berkeley offers online deepfake-detection courses [6].
  • Identification and Strict Validation: Implement stringent communication-security measures: real-time identity verification, including liveness testing for video-call participants; mandatory two-factor authentication with one-time passwords or PINs; and established biometrics to validate identities on sensitive communication channels. These measures help ensure the authenticity of individuals engaged in real-time activities.

The UK National Cyber Security Centre provides guidance on best practices for video conferencing authentication [7].
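To make the one-time-password element above concrete, here is a minimal sketch of time-based OTP generation and verification in the style of RFC 6238, using only Python’s standard library. The shared secret, 30-second window, and 6-digit code length shown are common defaults, not requirements of any particular product:

```python
import hashlib
import hmac
import struct
import time
from typing import Optional

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238 style, HMAC-SHA1)."""
    counter = timestamp // step                       # which time window we are in
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, now: Optional[int] = None,
           step: int = 30) -> bool:
    """Accept the current window plus one either side, to tolerate clock drift."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret, now + drift * step, step), submitted)
               for drift in (-1, 0, 1))
```

A real deployment would distribute the secret through an enrolment flow and rate-limit verification attempts; this sketch only illustrates the core derivation and the drift-tolerant check.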

  • Incident Response: Develop incident-response plans covering deepfake detection, personnel training, and crisis communications. The European Commission has published work on tackling deepfakes that can inform response planning [8].
  • Foster Intelligence Sharing: Partner with industry groups like the Content Authenticity Initiative [11] and experts like Sensity to share intelligence on evolving techniques and detection breakthroughs. Fund research efforts such as the DARPA Semantic Forensics program [10] into deep-learning detection while advocating for societal awareness through education initiatives.
  • Content Watermarking: Consider watermarking your images and videos. Watermarking adds a visible or invisible identifier to content, making it harder to manipulate undetected. If content is altered, a damaged or missing watermark can serve as evidence of tampering.
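As a toy illustration of the invisible variety, the sketch below hides a short watermark string in the least significant bit of each byte of a raw pixel buffer. Production watermarking schemes are far more robust (typically frequency-domain and resistant to re-encoding); this only shows the principle, and the pixel buffer and mark are made up for the example:

```python
def embed_watermark(pixels: bytes, mark: str) -> bytes:
    """Hide a UTF-8 watermark in the least-significant bit of each byte."""
    payload = mark.encode("utf-8") + b"\x00"          # NUL terminator marks the end
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit                # overwrite the lowest bit only
    return bytes(out)

def extract_watermark(pixels: bytes) -> str:
    """Recover the hidden watermark by reading least-significant bits."""
    data = bytearray()
    for i in range(0, len(pixels) - 7, 8):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i + j] & 1)
        if byte == 0:                                  # NUL terminator reached
            break
        data.append(byte)
    return data.decode("utf-8")
```

Flipping only the lowest bit changes each pixel value by at most one, which is imperceptible; re-extracting the mark from a suspect copy, and finding it absent or damaged, is the signal of tampering.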

As deepfake technology advances, combating disinformation requires a combination of technology, education, awareness, and collaboration. By implementing robust, tailored safeguards and promoting coordinated action, organizations can build resilience against this emerging threat. Remaining vigilant, informed, and proactive is key to defending against deepfakes in the digital age.


[1]. VMware Report Warns of Deepfake Attacks. Link

[2]. A Voice Deepfake Was Used to Scam A CEO. Link

[3]. How misinformation helped spark an attempted coup in Gabon. Link

[4]. Deepfake Audio Steals US$243,000 From UK Company. Link

[5]. This Deepfake of Mark Zuckerberg. Link

[6]. New technique for detecting deepfake videos. Link

[7]. Video conferencing services: security guidance for organisations. Link

[8]. Tackling deepfakes in European policy. Link

[9]. Contextualizing Deepfake Threats to Organizations. Link

[10]. DARPA Semantic Forensics program. Link

[11]. Content Authenticity Initiative. Link

[12]. AI Powered Identity Verification. Link  

Posted in Artificial Intelligence, DeepFakes, Ransomware

Cloud Cost Optimization: Best Practices with Examples

In the wake of the COVID-19 pandemic, businesses worldwide faced unprecedented challenges that accelerated the adoption of cloud technologies. The shift to remote work, digital transformation, and the need for scalable solutions prompted organizations to migrate their operations to the cloud. While this transition offered numerous benefits, it also introduced concerns about controlling cloud costs. As cost efficiency is a key goal for organizations that use cloud resources, mastering cloud cost optimization has become paramount. This article outlines key best practices and provides some examples to help businesses and their technology management teams balance the costs and benefits of cloud computing.

1. Right-sizing Instances

One of the most effective ways to lower cloud costs is to adjust instance sizes to the optimal level, matching an instance’s resources to the workload it supports. By selecting the appropriate instance type and size, businesses can avoid overprovisioning, which results in unnecessary costs. Regularly monitor and analyse resource utilization to ensure instances are optimized and adjusted as needed.

Example: Imagine a retail website that experiences significant traffic spikes during the holiday season. Instead of keeping high-capacity instances running throughout the year, the business can analyse historical data and use auto scaling to automatically add resources during peak times and reduce them during quieter periods.
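The analysis step can be sketched as a simple selection over historical utilization data. The instance catalog below is hypothetical (real families and sizes differ by provider), and the 20% headroom figure is an assumption, not a recommendation:

```python
import math

# Hypothetical catalog, ordered small -> large: (name, vCPUs, memory in GiB).
CATALOG = [("small", 2, 4), ("medium", 4, 8), ("large", 8, 16), ("xlarge", 16, 32)]

def p95(samples: list[float]) -> float:
    """95th percentile of a utilization series (nearest-rank method)."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered)) - 1
    return ordered[rank]

def right_size(cpu_used: list[float], mem_used_gib: list[float],
               headroom: float = 0.2) -> str:
    """Pick the smallest catalog entry that covers p95 demand plus headroom."""
    need_cpu = p95(cpu_used) * (1 + headroom)
    need_mem = p95(mem_used_gib) * (1 + headroom)
    for name, vcpus, mem in CATALOG:
        if vcpus >= need_cpu and mem >= need_mem:
            return name
    raise ValueError("demand exceeds largest instance type")
```

Sizing to a high percentile rather than the peak keeps headroom for normal variation while leaving rare spikes to autoscaling.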

2. Choose the Right Pricing Model

Cloud providers offer various pricing models, such as On-Demand, Reserved Instances, and Spot Instances. The best one depends on the workload patterns and usage needs. For example, predictable workloads can benefit from Reserved Instances, while non-critical, time-flexible tasks can use Spot Instances for lower costs. By reserving instances for extended periods, businesses can secure lower rates compared to on-demand pricing. Match the model that aligns with your workload characteristics.

Example: A data processing company runs data-intensive workloads. By using Spot Instances during off-peak hours, they can significantly cut costs while still completing the tasks efficiently. However, for critical real-time processes, they might opt for On-Demand Instances to ensure consistent performance.
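The comparison behind that decision is straightforward arithmetic. A hedged sketch, with hourly rates passed in rather than hard-coded since actual prices vary by provider, region, and term (the flat 730-hour month for the reserved commitment is a simplifying assumption):

```python
def cheapest_model(hours_per_month: float,
                   on_demand_rate: float,
                   reserved_rate: float,
                   spot_rate: float,
                   interruptible: bool) -> tuple[str, float]:
    """Return the cheapest pricing model and its monthly cost.

    Reserved pricing is modelled as a flat commitment for the full 730-hour
    month; Spot is only eligible for interruption-tolerant workloads.
    """
    options = {
        "on-demand": hours_per_month * on_demand_rate,
        "reserved": 730 * reserved_rate,              # you pay for the commitment
    }
    if interruptible:
        options["spot"] = hours_per_month * spot_rate
    best = min(options, key=options.get)
    return best, round(options[best], 2)
```

Run the comparison per workload: a batch job used 100 hours a month comes out cheapest on Spot, while an always-on service favours the reserved commitment.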

3. Implement Automated Scaling

Autoscaling is a feature cloud providers offer to automatically scale the number of instances according to demand. This dynamic approach prevents overprovisioning during quiet periods and eliminates the risk of underprovisioning during traffic spikes. Implementing autoscaling policies based on predefined metrics or user-defined thresholds can help optimize resource allocation and save money.

Example: A video streaming service experiences variable demand throughout the day. By implementing autoscaling, the service can add more instances when demand increases (e.g., during prime-time viewing) and reduce instances during periods of lower demand, ensuring seamless user experiences without overspending.
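The core of a target-tracking policy like this can be sketched in a few lines: scale the fleet in proportion to how far the observed metric sits from its target, clamped to configured bounds. This mirrors the general approach cloud autoscalers document, though each provider’s actual algorithm adds cooldowns and smoothing not shown here:

```python
import math

def desired_capacity(current: int, metric: float, target: float,
                     minimum: int = 1, maximum: int = 20) -> int:
    """Target-tracking style scaling: keep the average metric near its target."""
    if current == 0:
        return minimum
    desired = math.ceil(current * metric / target)    # proportional adjustment
    return max(minimum, min(maximum, desired))        # clamp to fleet bounds
```

For example, four instances averaging 90% CPU against a 60% target would be scaled up to six, and back down to two once the average drops to 30%.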

4. Monitoring and Analytics

Continuous monitoring of cloud resource utilization is crucial for identifying cost-optimization opportunities. Leverage cloud-native monitoring services or third-party tools to gain insight into usage patterns, performance metrics, and costs. Regularly review your application’s usage patterns to identify trends and seasonal variations, and to spot idle or underutilized resources that can be right-sized or terminated.

Example: An e-commerce platform observes that traffic surges occur every weekend due to sales and promotions. By monitoring these patterns, the platform can predict the required resources for each weekend and auto scale accordingly. This prevents overprovisioning and minimizes costs during quieter periods.
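One of the simplest analyses on top of that monitoring data is flagging idle resources. A minimal sketch, where the 5% CPU threshold and minimum sample count are illustrative assumptions:

```python
def find_idle(utilization: dict[str, list[float]],
              cpu_threshold: float = 5.0,
              min_samples: int = 24) -> list[str]:
    """Flag instances whose average CPU stayed below the threshold.

    Instances with too few samples are skipped to avoid flagging
    resources that were only recently launched.
    """
    idle = []
    for instance, samples in utilization.items():
        if len(samples) >= min_samples and sum(samples) / len(samples) < cpu_threshold:
            idle.append(instance)
    return sorted(idle)
```

The flagged list is a candidate set for right-sizing or termination, not an automatic kill list; a human (or a tag-based exemption policy) should review it first.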

5. Utilizing Serverless Architectures

Serverless architectures, such as Function-as-a-Service (FaaS), enable businesses to run applications without provisioning or managing servers. By using serverless computing, businesses can save costs by paying only for the actual compute time used. Serverless architectures automatically scale to match the workload demands, further optimizing resource allocation and reducing costs.

Example: A retail bank can create Azure Functions that execute server-side logic for various banking operations, such as account opening, transaction processing, fraud detection, etc. By using Azure Functions, the bank can pay only for the compute time consumed by each function invocation and scale automatically to handle peak workloads. The bank can also chain functions together to create complex workflows, or use long-running workflows with durable functions for scenarios that require stateful orchestration.
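Consumption-plan billing generally charges for execution time in GB-seconds plus a per-request fee. The sketch below estimates a monthly bill under that model; the default rates and free grant mirror commonly published consumption pricing but are illustrative only and should be checked against your provider’s current price sheet:

```python
def monthly_function_cost(invocations: int,
                          avg_duration_ms: float,
                          memory_mb: int,
                          price_per_gb_second: float = 0.0000166667,
                          price_per_million_requests: float = 0.20,
                          free_gb_seconds: float = 400_000) -> float:
    """Estimate consumption-plan cost: GB-seconds of execution plus requests."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    billable = max(0.0, gb_seconds - free_gb_seconds)  # monthly free grant first
    compute = billable * price_per_gb_second
    requests = invocations / 1_000_000 * price_per_million_requests
    return round(compute + requests, 2)
```

The model makes the trade-off visible: ten million 200 ms invocations at 512 MB cost only a few dollars, so for spiky banking workloads the per-use billing can undercut an always-on instance by a wide margin.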

6. Implementing Cost Allocation and Tagging

Proper cost allocation and tagging practices help organizations understand cloud costs at a granular level. By assigning appropriate tags to resources, businesses can track and allocate costs to specific teams, projects, or applications. This visibility enables better cost management, facilitates chargeback mechanisms, and encourages accountability for resource usage. This practice helps in allocating costs transparently and aids in identifying cost-saving opportunities.

Example: A multinational corporation manages multiple projects across various departments in the cloud. By tagging resources with relevant metadata (project names, teams, departments), the company can track spending for each project separately and allocate costs accurately.
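Once resources carry tags, the allocation itself is a simple aggregation. A minimal sketch over a hypothetical billing export, which also surfaces untagged spend (often the first cost-hygiene problem to fix):

```python
from collections import defaultdict

def cost_by_tag(resources: list[dict], tag_key: str) -> dict[str, float]:
    """Aggregate monthly cost by a tag, grouping untagged spend separately."""
    totals: dict[str, float] = defaultdict(float)
    for resource in resources:
        owner = resource.get("tags", {}).get(tag_key, "UNTAGGED")
        totals[owner] += resource["monthly_cost"]
    return dict(totals)
```

A large "UNTAGGED" bucket in the output is itself a finding: it means part of the bill cannot be attributed to any team or project.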

7. Optimize Data Storage

Data storage can be a significant source of cloud spending, especially if the data is not managed efficiently. To optimize data storage, you should assess your data storage needs and employ strategies that can reduce the amount of data stored or the cost per unit of storage. Some of these strategies are:

  • Tiered storage: This involves storing data in different tiers based on its frequency of access, performance requirements, and retention policies. For example, you can store frequently accessed data on high-performance SSDs, infrequently accessed data on low-cost HDDs, and archival data in cold storage, such as the archive classes of object storage.
  • Data lifecycle management: This involves defining and implementing policies for data creation, retention, deletion, and archiving. For example, you can set expiration dates for temporary data, delete obsolete data, and move old data to cheaper storage tiers or offline media.
  • Data compression: This involves reducing the size of data by applying compression algorithms or techniques. For example, you can compress text files, images, videos, or databases to save storage space and bandwidth.

By optimizing data storage, you can remove redundant or outdated data that may incur unnecessary expenses. You can also lower the cost of storing and transferring data by choosing the appropriate storage tier and format for your data. This way, you can optimize your cloud costs and improve your data management.
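Of the strategies above, compression is the easiest to quantify directly. A small sketch using Python’s standard zlib module to measure the savings on a blob of repetitive log data (the sample data is made up for the example):

```python
import zlib

def compression_savings(data: bytes, level: int = 6) -> tuple[int, int, float]:
    """Return (original size, compressed size, percent saved) for a blob."""
    compressed = zlib.compress(data, level)           # level 6 balances speed/ratio
    saved = (1 - len(compressed) / len(data)) * 100
    return len(data), len(compressed), round(saved, 1)
```

Highly repetitive data such as logs or CSV exports often shrinks by well over 90%, which reduces both the storage bill and the transfer cost; binary media that is already compressed will show little or no gain.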

8. Continuous Optimization

Cloud cost optimization is an ongoing process. Regularly review your cloud environment, monitor spending, and adjust strategies as your business evolves. Keeping up with optimization practices ensures long-term cost efficiency.

Example: A software development startup undergoes rapid growth, leading to changing resource requirements. Regularly reviewing their cloud infrastructure, the startup identifies that they are overprovisioning compute resources. They adjust their autoscaling policies and rightsizing strategies to align with the current demands, ensuring optimal cost efficiency.


Each business’s cloud optimization strategy will depend on its unique requirements and goals. By implementing the best practices mentioned in this document, organizations can ensure their cloud resources are utilized optimally, resulting in significant cost savings. Regularly review and adapt these practices to align with changing business requirements and optimize cloud resource usage.


Posted in Cloud Computing

Sandboxing for Success: The Key to Unleashing Generative AI in Banking

Innovation is the lifeblood of progress, especially in the dynamic landscape of financial institutions. However, as these institutions navigate the ever-evolving digital realm, they often find themselves shackled by stringent security protocols and regulatory requirements. Among the casualties of this cautious approach is the adoption of generative AI, a cutting-edge technology with the potential to revolutionize operations, customer experience, and beyond. Too often, the relationship between Chief Information Security Officers (CISOs) and cross-functional teams or departmental stakeholders serves as a roadblock rather than a catalyst for progress.

The Dilemma

At the heart of the issue lies a fundamental conflict between the imperative to innovate and the imperative to protect. CISOs are tasked with safeguarding data and meeting compliance obligations, while business teams drive growth through technological advancement. This clash of priorities often results in lengthy approval processes that stifle experimentation with new solutions like generative AI. Generative AI, with its ability to create novel content, poses unique challenges for data security and regulatory compliance. As a result, business teams in financial institutions find themselves caught in a catch-22: they cannot experiment with and embrace the potential of generative AI in a timely manner without risking inadvertent exposure of sensitive data, yet the resulting caution hinders innovation and competitiveness.

Bridging the Gap: A Collaborative Mindset

To overcome this stalemate, CISO teams must shed conventional bureaucratic processes and position themselves as enablers rather than obstructionists. By adopting an adaptable “can-do” mindset instead of a restrictive “no” approach, CISOs can cultivate trust with business teams. This open communication prevents teams from circumventing security measures and experimenting in silos, reducing the risks of data breaches and unauthorized data disclosure. Through this partnership, CISOs can empower business objectives while maintaining robust security safeguards.

The Sandbox Solution

Financial regulators in countries like the UK and Bahrain are proactively backing Fintech startups by providing them access to sandbox environments for testing technology-driven innovations. Establishing a dedicated sandbox environment for such experimentation offers a controlled and isolated space where new technologies can undergo testing without compromising the security or functionality of existing production systems.

Here’s how a sandbox environment can facilitate the adoption of generative AI while addressing concerns related to data security and regulatory compliance:

1. Controlled Experimentation: A sandbox environment allows business teams to explore the capabilities of generative AI in a controlled setting. By isolating experimental activities from production systems, financial institutions can minimize the risk of data breaches and regulatory violations.

2. Rapid Iteration: With a sandbox environment in place, the approval process for experimenting with generative AI can be streamlined. CISO teams can focus on assessing the security implications within the sandbox environment, allowing business users to iterate rapidly and explore innovative use cases without unnecessary delays.

3. Compliance Assurance: By implementing robust monitoring and auditing mechanisms within the sandbox environment, financial institutions can demonstrate compliance with regulatory requirements while experimenting with generative AI. This proactive approach to compliance management instils confidence among stakeholders and regulatory authorities.

4. Knowledge Sharing: A sandbox environment fosters collaboration among the CISO’s team, cross-functional teams, and departmental stakeholders, facilitating knowledge sharing and cross-functional learning. By working together to address security concerns and explore the potential of generative AI, teams can leverage their collective expertise to drive innovation responsibly.

5. Risk Mitigation: Despite the inherent risks associated with experimenting with emerging technologies, a sandbox environment allows financial institutions to mitigate these risks effectively. By identifying and addressing security vulnerabilities and compliance gaps in a controlled setting, organizations can proactively minimize the likelihood of adverse outcomes in production environments.

In conclusion, the cautious approach adopted by CISOs, while well intentioned, often hampers the adoption of generative AI and stifles innovation in financial institutions. By embracing sandbox environments for experimentation, financial institutions can strike a balance between security and innovation, unlocking the full potential of generative AI while safeguarding data and complying with regulatory requirements. It is time to break down the barriers and embrace a future where innovation and security go hand in hand.

Posted in Artificial Intelligence, Generative AI

TalkTalk website hacked: it means my personal details are compromised :-(


Last week’s news direct from TalkTalk came as a personal shock: I am one of their customers, and now I have been a victim of this cyber attack. The first thing I wanted to do was change the password for my TalkTalk account, but the website is still unavailable. Most worrying is that there is still no answer to the fundamental questions: what data was breached, and was it encrypted?

In an update, TalkTalk said the amount of financial data stolen from its systems was “materially lower” than expected, and said that the attack was on its public-facing website and not its core systems.

The data which may have been breached includes:

  • Names
  • Addresses
  • Dates of birth
  • Email addresses
  • Telephone numbers
  • TalkTalk account information
  • Credit card details and/or bank details

Believing that my details held with TalkTalk have been compromised, I have taken the precautionary measures below and would suggest that those affected consider doing the same:

  • Change passwords:

As noted above, the TalkTalk website is still unavailable, but once it is restored, please change your TalkTalk account password.

If the same password is used to protect other online accounts, for example banking, social media, or any other essential service, change those passwords as well.

  • Answering phone calls/emails:

Be careful if you receive a phone call or email claiming to be from your bank and asking you to reveal passwords or banking details. TalkTalk and the banks have repeatedly said that they will never ask customers to reveal personal passwords or PINs over the phone or via email.

  • Check your bank/credit card accounts:

Watch your bank accounts for any unexpected activity and report it to your bank immediately.

  • Credit monitoring:

I received an email from TalkTalk containing a TT231 code, which can be used at Noddle to monitor your credit file free of charge for the next 12 months. More details are available via this link.

We as consumers expect the relevant authorities investigating this breach to get answers to these fundamental questions for us:

  • Did TalkTalk carry out the due diligence required to protect sensitive data?
  • Was the data stored encrypted?
  • Were the encryption keys protected?
  • When was the website last penetration tested, by whom, and what was the status of any open issues at the time of the breach?
  • Was TalkTalk PCI DSS compliant at the time of the breach?
  • This does not appear to be the first time the website was attacked; did TalkTalk act on the recommendations arising from the previous breach?
Posted in Data Breaches

Underworld of Hackers; how they work and what you can do to protect yourself

Ever wondered what your personal information is worth to hackers, why they want it, and what threats may come your way once they have it? This article shows not only how the hacker underworld operates but also the value of your personal information, your cards, and even your PC when traded on the dark web. Dell has also provided some mitigation advice for both companies and individuals. Worth reading; very valuable information.

Posted in Online Safety

GCHQ employs: dyslexic and dyspraxic spies

I came across an interesting article published in the Sunday Telegraph about the employment of dyslexic young people by GCHQ. More than 100 dyslexic and dyspraxic, ‘neuro-diverse’ analysts have been employed so far. Unfortunately, in many countries these conditions are not diagnosed in a timely fashion; people are branded as dull or clumsy, leaving them to become a burden on society in one way or another.

See this impressive video. Is it not remarkable?

Posted in Nation-State

Free Site Recovers Files Locked By Cryptolocker

Previously, I shared some information about CryptoLocker, ransomware discovered in September 2013 that encrypts a Windows user’s data files, including documents, photos, and music. Until now, the only option for getting that data back was, unfortunately, to pay the hackers whatever ransom they demanded. The good news is that two security companies, FireEye and Fox-IT, have launched a site that anyone can use to recover files locked by CryptoLocker. The site works by sending an email link that victims can use to download a recovery program and get back all of their files. Thanks to both companies: the service is free, so enjoy 🙂

Posted in Ransomware

Why are most credit hacks happening in the US?

In recent years, more than 80 countries have upgraded to credit cards embedded with microchips. Cards with chips are exceedingly difficult to counterfeit, and there is added security in every transaction: terminals require a user’s PIN, and the information on the chip is encrypted.

Yet Americans keep using payment technology that was developed in the 1960s. That poses a big risk: cards with magnetic stripes deliver all your data without hiding anything. Swipe your card and the computer sees everything in plain text: your name, credit card provider, card number, expiration date, and more. Continue reading…

Posted in Data Breaches

Wow….You’re infected—if you want to see your data again, pay us $300 in Bitcoins

Malware that takes computers hostage until users pay a ransom is getting meaner, and thanks to the growing prevalence of Bitcoin and other digital payment systems, it’s easier than ever for online crooks to capitalize on these “ransomware” schemes. If this wasn’t already abundantly clear, consider the experience of Nic, an Ars reader who fixes PCs for a living and recently helped a client repair the damage inflicted by a particularly nasty title known as CryptoLocker.

It started when an end user in the client’s accounting department received an e-mail purporting to come from Intuit. Continue reading…

Posted in Ransomware