Imperva Firewall Breach Exposes Customer API Keys, SSL Certificates

The issue impacts users of the vendor’s Cloud WAF product.

Security vendor Imperva has disclosed a breach affecting customers of its Cloud Web Application Firewall (WAF) product.

Formerly known as Incapsula, the Cloud WAF analyzes requests coming into applications, and flags or blocks suspicious and malicious activity.

Users’ emails and hashed and salted passwords were exposed, and some customers’ API keys and SSL certificates were also impacted. The latter are particularly concerning, given that they would allow an attacker to break companies’ encryption and access corporate applications directly.
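The detail that the exposed passwords were “hashed and salted” matters: salting means each stored hash is derived from the password plus a unique random value, so a precomputed lookup attack cannot be reused across accounts. A minimal sketch of the idea in Python, using PBKDF2 (an illustration of the general technique, not Imperva’s actual implementation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a random per-user salt using PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # a unique salt per user defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, expected_digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected_digest)
```

Even so, salted hashes only slow an attacker down; leaked API keys and SSL certificates need no cracking at all, which is why they are the more serious exposure here.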

Imperva has implemented password resets and 90-day password expiration for the product in the wake of the incident.

Imperva said in a website notice that it learned about the exposure from a third party on August 20. The affected customer database, however, contained only old Incapsula records dating up to September 15, 2017.


“We profoundly regret that this incident occurred and will continue to share updates going forward,” Imperva noted. “In addition, we will share learnings and new best practices that may come from our investigation and enhanced security measures with the broader industry. We continue to investigate this incident around the clock and have stood up a global, cross-functional team.”

Imperva also said that it “informed the appropriate global regulatory agencies” and is in the process of notifying affected customers directly.

When asked for more details (such as if this is a misconfiguration issue or a hack, where the database resided and how many customers are affected), Imperva told Threatpost that it is not able to provide more information for now.

Source: https://threatpost.com/imperva-firewall-breach-api-keys-ssl-certificates/147743/

Discord was down due to Cloudflare outage affecting parts of the web

Popular chat service Discord experienced issues today due to network problems at Cloudflare and a wider internet issue. The app was inaccessible for its millions of users, and even Discord’s website and status pages were struggling. Discord’s problems could be traced to an outage at Cloudflare, a content delivery network. Cloudflare started experiencing issues at 7:43AM ET, and this caused Discord, Feedly, Crunchyroll, and many other sites that rely on its services to have partial outages.

Cloudflare says it’s working on a “possible route leak” affecting some of its network, but services like Discord have been inaccessible for nearly 45 minutes now. “Discord is affected by the general internet outage,” says a Discord statement on the company’s status site. “Hang tight. Pet your cats.”

“This leak is impacting many internet services including Cloudflare,” says a Cloudflare spokesperson. “We are continuing to work with the network provider that created this route leak to remove it.” Cloudflare doesn’t name the network involved, but Verizon is also experiencing widespread issues across the East Coast of the US this morning. Cloudflare notes that “the network responsible for the route leak has now fixed the issue,” so services should start to return to normal shortly.

Cloudflare explained the outage in an additional statement, commenting that “Earlier today, a widespread BGP routing leak affected a number of Internet services and a portion of traffic to Cloudflare. All of Cloudflare’s systems continued to run normally, but traffic wasn’t getting to us for a portion of our domains. At this point, the network outage has been fixed and traffic levels are returning to normal.”
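A route leak of this kind works because routers forward traffic using longest-prefix match: if a network leaks a more specific prefix than the legitimate one, traffic for that range follows the leaked route. A toy illustration with Python’s `ipaddress` module (the prefixes and next-hop labels are hypothetical, not the actual leaked routes):

```python
import ipaddress

# Simplified routing table: prefix -> next-hop label (hypothetical values).
routes = {
    ipaddress.ip_network("104.16.0.0/12"): "cloudflare",       # legitimate aggregate route
    ipaddress.ip_network("104.16.0.0/16"): "leaking-network",  # leaked more-specific route
}

def next_hop(ip):
    """Longest-prefix match: the most specific route covering the address wins."""
    addr = ipaddress.ip_address(ip)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]
```

Because the /16 is more specific than the /12, any address inside it is diverted to the leaking network, while the rest of the aggregate still reaches the legitimate destination.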

Source: https://www.theverge.com/2019/6/24/18715308/discord-down-outage-cloudflare-problems-crunchyroll-feedly

When 911 Goes Down: Why Voice Network Security Must Be a Priority

When there’s a DDoS attack against your voice network, are you ready to fight against it?

An estimated 240 million calls are made to 911 in the US each year. With the US population estimated at more than 328 million people as of November 2018, that works out to nearly three 911 calls for every four US residents annually. 911 is a critical communications service that ensures the safety and individual welfare of our nation’s people.

So, what happens when the system goes down?

Unfortunately, answers can include delays in emergency responses, reputational damage to your brand or enterprise by being associated with an outage, and even loss of life or property. We have seen very recent examples of how disruption in 911 services can impact municipalities. For example, days after Atlanta was struck by a widespread ransomware attack, news broke of a hacking attack on Baltimore’s computer-assisted dispatch system, which is used to support and direct 911 and other emergency calls. For three days, dispatchers were forced to track emergency calls manually as the system was rebuilt — severely crippling their ability to handle life-and-death situations.

In 2017, cybersecurity firm SecuLore Solutions reported that there had been 184 cyberattacks on public safety agencies and local governments within the previous two years. 911 centers had been directly or indirectly attacked in almost a quarter of those cases, most of which involved distributed denial-of-service (DDoS) attacks.

Unfortunately, these kinds of DDoS attacks will continue unless we make it a priority to improve the security of voice systems, which remain dangerously vulnerable. This is true not just for America’s emergency response networks, but also for voice networks across a variety of organizations and industries.

The Evolving DDoS Landscape
In today’s business world, every industry sector now relies on Internet connectivity and 24/7 access to online services to successfully conduct sales, stay productive, and communicate with customers. With each DDoS incident costing $981,000 on average, no organization can afford to have its systems offline.

This is a far cry from the early days of DDoS, when a 13-year-old student discovered he could force all 31 users of the University of Illinois Urbana-Champaign’s CERL instruction system to power off at once. DDoS was primarily used as a pranking tool until 2007, when Estonian banks, media outlets, and government bodies were taken down by unprecedented levels of Internet traffic, which sparked nationwide riots.

Today, DDoS techniques have evolved to use Internet of Things devices, botnets, self-learning algorithms, and multivector techniques to amplify attacks that can take down critical infrastructure or shut down an organization’s entire operations. Last year, GitHub experienced what was then the largest-ever recorded DDoS attack, which relied on UDP-based memcached traffic to boost its power; days later, an even larger memcached-based attack was recorded against a US service provider.

As these attacks become bigger, more sophisticated, and more frequent, security measures have also evolved. Organizations have made dramatic improvements in implementing IP data-focused security strategies; however, IP voice and video haven’t received the same attention, despite being equally vulnerable. Regulated industries like financial services, insurance, education, and healthcare are particularly susceptible — in 2012, a string of DDoS attacks severely disrupted the online and mobile banking services of several major US banks for extended periods of time. Similarly, consider financial trading — since some transactions are still done over the phone, those jobs would effectively grind to a halt if a DDoS attack successfully took down their voice network.

As more voice traffic travels over IP networks and more voice-activated technologies are adopted, DDoS poses a growing threat to critical infrastructure, businesses, and entire industries. According to a recent IDC survey, more than 50% of IT security decision-makers say their organization has been the victim of a DDoS attack as many as 10 times in the past year.

Say Goodbye to DDoS Attacks
For the best protection from DDoS attacks, organizations should consider implementing a comprehensive security strategy that includes multiple layers and technologies. Like any security strategy, there is no panacea, but by combining the following solutions with other security best practices, organizations will be able to better mitigate the damages of DDoS attacks:

  • Traditional firewalls: While traditional firewalls likely won’t protect against a large-scale DDoS attack, they are foundational in helping organizations protect data across enterprise networks and for protection against moderate DDoS attacks.
  • Session border controllers (SBCs): What traditional firewalls do for data, SBCs do for voice and video data, which is increasingly shared over IP networks and provided by online services. SBCs can also act as session managers, providing policy enforcement, load balancing and network/traffic analysis. (Note: Ribbon Communications is one of a number of companies that provide SBCs.)
  • Web application firewalls: As we’ve seen with many DDoS attacks, the target is often a particular website or online service. And for many companies these days, website uptime is mission-critical. Web application firewalls extend the power of traditional firewalls to corporate websites.

Further, when these technologies are paired with big data analytics and machine learning, organizations can better predict normative endpoint and network behavior. In turn, they can more easily identify suspicious and anomalous actions, like the repetitive calling patterns representative of telephony DoS attacks or toll fraud.
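The repetitive calling patterns mentioned above can be caught with even very simple baselining before any machine learning is involved. A simplified sketch (not a production TDoS detector): count each caller’s calls inside a sliding time window and flag anyone who exceeds a threshold.

```python
from collections import defaultdict

def flag_repetitive_callers(call_log, window_seconds=60, threshold=10):
    """Flag callers whose call volume within a sliding window exceeds a threshold.

    call_log: iterable of (caller_id, timestamp_seconds) tuples.
    Returns the set of caller IDs exhibiting repetitive-dialing (TDoS-style) behavior.
    """
    calls = defaultdict(list)
    for caller, ts in call_log:
        calls[caller].append(ts)

    flagged = set()
    for caller, times in calls.items():
        times.sort()
        left = 0
        for right in range(len(times)):
            # Shrink the window until it spans at most window_seconds.
            while times[right] - times[left] > window_seconds:
                left += 1
            if right - left + 1 >= threshold:
                flagged.add(caller)
                break
    return flagged
```

A real deployment would learn per-endpoint baselines rather than use a fixed threshold, but the window-and-count core is the same.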

DDoS attacks will continue to be a threat for organizations to contend with. Cybercriminals will always look toward new attack vectors, such as voice networks, to find the one weak spot in even the most stalwart of defenses. If organizations don’t take the steps necessary to make voice systems more secure, critical infrastructure, contact centers, healthcare providers, financial services and educational institutions will certainly fall victim. After all, it only takes one overlooked vulnerability to let attackers in.

Source: https://www.darkreading.com/attacks-breaches/when-911-goes-down-why-voice-network-security-must-be-a-priority-/a/d-id/1333782

Ad Fraud 101: How Cybercriminals Profit from Clicks

Fraud is and always will be a cornerstone of the cybercrime community. The associated economic gains provide substantial motivation for today’s malicious actors, which is reflected in the rampant use of identity and financial theft, and ad fraud. Fraud is, without question, big business. You don’t have to look far to find websites, on both the clear and the darknet, that profit from the sale of your personal information.

Fraud-related cyber criminals are employing an evolving arsenal of tactics and malware designed to engage in these types of activities. What follows is an overview.

Digital Fraud

Digital fraud—the use of a computer for criminal deception or the abuse of web-enabled assets for financial gain—can be categorized into three groups for the purposes of this blog: basic identity theft with the goal of collecting and selling identifiable information, targeted campaigns focused exclusively on obtaining financial credentials, and fraud that generates artificial traffic for profit.

Digital fraud is its own sub-community consistent with typical hacker profiles. You have consumers dependent on purchasing stolen information to commit additional fraudulent crime, such as making fake credit cards and cashing out accounts, and/or utilizing stolen data to obtain real world documents like identification cards and medical insurance. There are also general hackers, motivated by profit or disruption, who publicly post personally identifiable information that can be easily scraped and used by other criminals. And finally, there are pure vendors who are motivated solely by profit and have the skills to maintain, evade and disrupt at large scales.

  • Identity fraud harvests complete or partial user credentials and personal information for profit. This group mainly consists of cybercriminals who target databases with numerous attack vectors for the purposes of selling the obtained data for profit. Once the credentials reach their final destination, other criminals will use the data for additional fraudulent purposes, such as digital account takeover for financial gains.
  • Banking fraud harvests banking credentials, digital wallets and credit cards from targeted users. This group consists of highly talented and focused criminals who only care about obtaining financial information, access to cryptocurrency wallets or digitally skimming credit cards. These criminals’ tactics, techniques and procedures (TTP) are considered advanced, as they often involve the threat actor’s own created malware, which is updated consistently.
  • Ad fraud generates artificial impressions or clicks on a targeted website for profit. This is a highly skilled group of cybercriminals that is capable of building and maintaining a massive infrastructure of infected devices in a botnet. Different devices are leveraged for different types of ad fraud, but generally, PC-based ad fraud campaigns are capable of silently opening an internet browser on the victim’s computer and clicking on an advertisement.

Ad Fraud & Botnets

Typically, botnets—collections of compromised devices, each referred to as a bot, controlled by a malicious actor, a.k.a. a “bot herder”—are associated with flooding networks and applications with large volumes of traffic. But they also send large volumes of malicious spam, which is leveraged to steal banking credentials or to conduct ad fraud.

However, operating a botnet is not cheap and operators must weigh the risks and expense of operating and maintaining a profitable botnet. Generally, a bot herder has four campaign options (DDoS attacks, spam, banking and ad fraud) with variables consisting of research and vulnerability discovery, infection rate, reinfection rate, maintenance, and consumer demand.

With regards to ad fraud, botnets can produce millions of artificially generated clicks and impressions a day, resulting in a financial profit for the operators. Two recent ad fraud campaigns highlight the effectiveness of botnets:

  • 3ve, pronounced “eve,” was recently taken down by White Ops, Google and the FBI. This PC-based botnet infected over a million computers and utilized tens of thousands of websites for the purpose of click fraud activities. The infected users would never see the activity conducted by the bot, as it would open a hidden browser outside the view of the user’s screen to click on specific ads for profit.
  • Mirai, an IoT-based botnet, was used to launch some of the largest recorded DDoS attacks in history. When the co-creators of Mirai were arrested, their indictments indicated that they had also engaged in ad fraud with this botnet. The actors were able to conduct what is known as impression fraud, generating artificial traffic and directing it at targeted sites for profit.

The Future of Ad Fraud

Ad fraud is a major threat to advertisers, costing them millions of dollars each year. And the threat is not going away, as cyber criminals look for more profitable vectors through various chaining attacks and alteration of the current TTPs at their disposal.

As more IoT devices continue to be connected to the Internet with weak security standards and vulnerable protocols, criminals will find ways to maximize the profit of each infected device. Currently, it appears that criminals are looking to maximize their new efforts and infection rate by targeting insecure or unmaintained IoT devices with a wide variety of payloads, including those designed to mine cryptocurrencies, redirect users’ sessions to phishing pages or conduct ad fraud.

Source: https://securityboulevard.com/2019/01/ad-fraud-101-how-cybercriminals-profit-from-clicks/

DDoS Protection is the Foundation for Application, Site and Data Availability

When we think of DDoS protection, we often think about how to keep our website up and running. While searching for a security solution, you’ll find several options that are similar on the surface. The main difference is whether your organization requires a cloud, on-premise or hybrid solution that combines the best of both worlds. Finding a DDoS mitigation/protection solution seems simple, but there are several things to consider.

It’s important to remember that DDoS attacks don’t just cause a website to go down. While the majority do cause a service disruption, 90 percent of the time that disruption is a performance degradation rather than complete unavailability. As a result, organizations need to search for a DDoS solution that can optimize application performance and protect from DDoS attacks. The two functions are natural bedfellows.

The other thing we often forget is that most traditional DDoS solutions, whether they are on-premise or in the cloud, cannot protect us from an upstream event or a downstream event.

  1. If your carrier is hit with a DDoS attack upstream, your link may be fine but your ability to do anything would be limited. You would not receive any traffic from that pipe.
  2. If your infrastructure provider goes down due to a DDoS attack on its key infrastructure, your organization’s website will go down regardless of how well your DDoS solution is working.

Many DDoS providers will tell you these are not part of a DDoS strategy. I beg to differ.

Finding the Right DDoS Solution

DDoS protection was born out of the need to improve availability and guarantee performance. Today, this is critical: we have become an application-driven world where digital interactions dominate. A bad experience using an app is worse for customer satisfaction and loyalty than an outage. Most companies are moving into shared infrastructure environments—otherwise known as the “cloud”—where the performance of the underlying infrastructure is no longer controlled by the end user.

  1. Data center or host infrastructure rerouting capabilities give organizations the ability to reroute traffic to secondary data centers or application servers if there is a performance problem caused by something that the traditional DDoS prevention solution cannot negate. This may or may not be caused by a traditional DDoS attack, but either way, it’s important to understand how to mitigate the risk of a denial of service caused by infrastructure failure.
  2. Simple-to-use link or host availability solutions offer a unified interface for conducting WAN failover in the event that the upstream provider is compromised. Companies can use BGP, but BGP is complex and rigid. The future needs to be simple and flexible.
  3. Infrastructure and application performance optimization is critical. If we can limit the amount of compute-per-application transactions, we can reduce the likelihood that a capacity problem with the underlying architecture can cause an outage. Instead of thinking about just avoiding performance degradation, what if we actually improve the performance SLA while also limiting risk? It’s similar to making the decision to invest your money as opposed to burying it in the ground.
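The rerouting capability in point 1 reduces, at its simplest, to health-check-driven failover down a priority list of endpoints. A toy sketch of that decision logic (the endpoint names are hypothetical; real systems would layer health probes, DNS or BGP updates on top):

```python
def choose_endpoint(endpoints, is_healthy):
    """Return the first healthy endpoint in priority order, failing over as needed."""
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("no healthy endpoints available")

# Example policy: primary data center first, then secondary, then a static fallback.
PRIORITY = [
    "dc-primary.example.com",
    "dc-secondary.example.com",
    "static-fallback.example.com",
]
```

The point of a unified failover interface is precisely that this policy stays this simple for the operator, even when the underlying mechanism (DNS, BGP, load balancer) is not.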

Today you can look at buying separate products to accomplish these needs, but you are then left with an age-old problem: a disparate collection of poorly integrated best-of-breed solutions that don’t work well together.

These products should work together as part of a holistic solution where each solution can compensate and enhance the performance of the other and ultimately help improve and ensure application availability, performance and reliability. The goal should be to create a resilient architecture to prevent or limit the impact of DoS and DDoS attacks of any kind.

Source: https://securityboulevard.com/2018/09/ddos-protection-is-the-foundation-for-application-site-and-data-availability/

Loss of Customer Trust and Confidence Biggest Consequence of DDoS Attacks

A new study from Corero Network Security has revealed that the most damaging consequence of a distributed denial-of-service (DDoS) attack for a business is the erosion of customer trust and confidence.

The firm surveyed IT security professionals at this year’s Infosecurity Europe; 42% of respondents cited loss of customer trust and confidence as the worst effect of suffering a DDoS attack, while just 26% cited data theft as the most damaging.

Third most popular among those polled was potential revenue loss (13%), followed by the threat of intellectual property theft (10%).

“Network and web services availability are crucial to ensuring customer satisfaction and sustaining customer trust and confidence in a brand,” said Ashley Stephenson, CEO at Corero Network Security. “These indicators are vital to both the retention and acquisition of customers in highly competitive markets. When an end user is denied access to internet-facing applications or network outages degrade their experience, it immediately impacts brand reputation.”

Corero’s findings come at a time when DDoS attacks continue to cause havoc for organizations around the world.

Link11’s Distributed Denial of Service Report for Europe revealed that DDoS attacks remained at a high level during Q2 2018, with attackers focusing on European targets 9,325 times during the period of April-June. That equated to an average of 102 attacks per day.

“The cyber-threat landscape has become increasingly sophisticated and companies remain vulnerable to DDoS because many traditional security infrastructure products, such as firewalls and IPS, are not sufficient to mitigate modern attacks,” added Corero’s Stephenson. “Proactive DDoS protection is a critical element in proper cybersecurity protection against loss of service and the potential for advanced, multi-modal attack strategies.”

“With our digital economy utterly dependent upon access to the internet, organizations should think carefully about taking steps to proactively protect business continuity, particularly including DDoS mitigation.”

Source: https://www.infosecurity-magazine.com/news/loss-trust-confidence-ddos/

How to Improve Website Resilience for DDoS Attacks – Part II – Caching

In the first post of this series, we talked about the practices that will optimize your site and increase your website’s resilience to DDoS attacks. Today, we are going to focus on caching best practices that can reduce the chances of a DDoS attack bringing down your site.

Website caching is a technique to store content in a ready-to-go state, with little or no code processing required. When a CDN is in place, the cache stores the content in a server location closer to the visitor. It’s basically a point-in-time snapshot of the content.

Caching

When a website is accessed, the server usually needs to compile the website code, display the end result to the visitor, and provide the visitor with all the website’s assets. This all takes a toll on your server resources, slowing down the total page load time. To avoid this overhead, it’s necessary to leverage certain types of caching whenever possible.

Caching will not only decrease load-time indicators such as time to first byte (TTFB); it will also save your server resources.

Types of Caching

There are all sorts of caching types and strategies, but we won’t cover them all. In this article, we’ll cover the three we see most in practice.

Static Files

The first type is the simplest one: static file caching.

Images, videos, CSS, JavaScript, and fonts should always be served from a content delivery network (CDN). These network providers operate thousands of servers, spread out across global data centers. This means they can deliver more data much faster than your server ever could on its own.

When using a CDN, the chances of your server suffering from bandwidth exhaustion attacks are minimal.

Your website will also be much faster, since a large portion of website content is composed of static files, and those will be served by the CDN.
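The static/dynamic split usually shows up as a caching policy on the origin’s responses: long-lived, CDN-friendly headers for static assets, revalidation for pages. A simplified sketch of such a policy (the header values are a common convention, not a CDN requirement):

```python
def cache_headers(path, one_year=31536000):
    """Return illustrative cache headers for a response, based on file type."""
    static_suffixes = (".css", ".js", ".png", ".jpg", ".woff2", ".svg", ".mp4")
    if path.endswith(static_suffixes):
        # Long-lived and CDN-cacheable; fingerprinted filenames bust the cache on deploy.
        return {"Cache-Control": f"public, max-age={one_year}, immutable"}
    # Dynamic pages: make the CDN revalidate with the origin before serving.
    return {"Cache-Control": "no-cache"}
```

With headers like these, the CDN absorbs nearly all asset traffic, which is exactly what blunts a bandwidth-exhaustion attack against the origin.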

Page Caching

This is definitely the most powerful type of cache. Page caching converts your dynamic website into HTML pages when possible, making the website a lot faster and decreasing server resource usage.

A while ago, I wrote an article about Testing the Impacts of Website Caching Tools.

In that article, with the help of a simple caching plugin, the web server was able to serve four times as many requests using a quarter of the server resources, compared with the test without the caching plugin.
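A page cache of the kind those plugins implement can be sketched as a small TTL cache keyed by URL (a minimal illustration, not a production implementation):

```python
import time

def page_cache(ttl_seconds=300):
    """Cache a page-rendering function's HTML output for ttl_seconds."""
    def decorator(render):
        store = {}  # url -> (expires_at, html)
        def wrapper(url):
            now = time.monotonic()
            entry = store.get(url)
            if entry and entry[0] > now:
                return entry[1]            # cache hit: skip the expensive render
            html = render(url)             # cache miss or expired: render and store
            store[url] = (now + ttl_seconds, html)
            return html
        return wrapper
    return decorator
```

Every request served from `store` is a request that never touches PHP, the database, or the template engine, which is where the resource savings come from.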

However, as you may know, not every page is “cacheable”. This leads us to the next type…

In-Memory Caching

By using software such as Redis or Memcached, your website will be able to retrieve part of your database information straight from the server memory.

Using in-memory caching improves the response time of SQL queries. It also decreases the volume of read and write operations on the web server disk.

All kinds of websites should be able to leverage in-memory caching, but not every hosting provider supports it. Make sure yours does before trying to use such technology.
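The usual pattern with Redis or Memcached is cache-aside: check memory first, fall back to the database on a miss, then store the result for next time. A sketch with a plain dict standing in for the real in-memory store (a real Redis client would also set a TTL on each key):

```python
class CacheAside:
    """Cache-aside pattern; a dict stands in for a Redis/Memcached client here."""

    def __init__(self, query_db, ttl_seconds=300):
        self.query_db = query_db   # expensive fallback, e.g. a SQL query
        self.ttl = ttl_seconds     # a real client would pass this to SET/EXPIRE
        self.cache = {}            # key -> value, held in memory
        self.db_hits = 0

    def get(self, key):
        if key in self.cache:
            return self.cache[key]     # served from memory: no disk I/O, no SQL
        value = self.query_db(key)     # cache miss: hit the database once
        self.db_hits += 1
        self.cache[key] = value
        return value
```

This is why in-memory caching cuts both SQL response times and read/write load on the web server disk: repeated reads of hot keys never leave RAM.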

Conclusion

We highly recommend using caching wisely in order to spare your server bandwidth and to make your website work faster and better.

Our Website Application Firewall (WAF) provides a variety of caching options that can suit your website’s needs. It also works as a CDN, improving your website’s performance. Not only do we protect your website from DDoS attacks, but we also make it up to 90% faster with our WAF.

We are still planning to cover other best practices about how to improve website resilience for DDoS attacks in other posts. Subscribe to our email feed and don’t miss our educational content based on research from our website security team.

Source: https://securityboulevard.com/2018/08/how-to-improve-website-resilience-for-ddos-attacks-part-ii-caching/

FCC Admits It Lied About the DDoS Attack During Net Neutrality Comment Process – Ajit Pai Blames Obama

During the time the Federal Communications Commission (FCC) was taking public comments ahead of the rollback of net neutrality rules, the agency had claimed its comments system was knocked offline by distributed denial-of-service (DDoS) attacks.

These attacks were used to question the credibility of the comment process, in which millions of Americans had voiced opposition to the net neutrality rollback. The Commission then chose to ignore the public comments altogether.

FCC now admits it’s been lying about these attacks all this time

No one bought the FCC’s claims that its comment system was targeted by hackers during the net neutrality comment process. Investigators have now validated those suspicions, revealing that there is no evidence to support the claims of DDoS attacks in 2017. Following an investigation carried out after lawmakers and journalists pushed the agency to share evidence of these attacks, FCC Chairman Ajit Pai has today released a statement admitting that there was no DDoS attack.

This statement would have been surprising coming from Pai – an ex-Verizon employee who has continued to disregard public comments, stonewall journalists’ requests for data, and ignore lawmakers’ questions – if he hadn’t thrown the CIO under the bus, taking no responsibility whatsoever for the lies. In his statement, Pai blamed the former CIO and the Obama administration for providing “inaccurate information about this incident to me, my office, Congress, and the American people.”

He went on to say that the CIO’s subordinates were scared of disagreeing with him and never approached Pai. If all of that is indeed true, the Chairman hasn’t clarified why he wouldn’t demand to see the evidence, despite nearly everyone outside the agency already believing that the DDoS claim was nothing but a lie to invalidate the comment process.

“It has become clear that in addition to a flawed comment system, we inherited from the prior Administration a culture in which many members of the Commission’s career IT staff were hesitant to express disagreement with the Commission’s former CIO in front of FCC management. Thankfully, I believe that this situation has improved over the course of the last year. But in the wake of this report, we will make it clear that those working on information technology at the Commission are encouraged to speak up if they believe that inaccurate information is being provided to the Commission’s leadership.”

The statement comes as the result of an independent investigation by the FCC’s Office of Inspector General that is to be published soon. However, looking at Pai’s statement, it is clear what the report is going to say.

As a reminder, the current FCC leadership didn’t just concoct this story of the DDoS attack. It also tried to bolster its false claims by suggesting that this wasn’t the first such incident, as the FCC had suffered a similar attack in 2014 under former chairman Tom Wheeler, and that Wheeler had lied about the true nature of that attack to save the agency from embarrassment. The former Chairman then went on record to call out Pai’s FCC for lying to the public, as there was no cyberattack under his leadership.

Pai throws CIO under the bus; takes no responsibility

And now it appears the FCC was also lying about the true nature of the failure of the comment system in 2017. In his statement released today, Pai is once again blaming [PDF] the Obama administration for feeding him inaccurate information.

I am deeply disappointed that the FCC’s former [CIO], who was hired by the prior Administration and is no longer with the Commission, provided inaccurate information about this incident to me, my office, Congress, and the American people. This is completely unacceptable. I’m also disappointed that some working under the former CIO apparently either disagreed with the information that he was presenting or had questions about it, yet didn’t feel comfortable communicating their concerns to me or my office.

It remains unclear why the new team that replaced former CIO David Bray nearly a year ago didn’t debunk what is being called a “conspiracy theory” and come clean about it.

Some redacted emails obtained through the Freedom of Information Act (FOIA) by American Oversight had previously revealed that the false theory of a 2014 cyberattack, invoked to justify the 2017 attack claims, also appeared in a draft blog post written on behalf of Pai. That draft was never published online, keeping Pai’s hands clean since there was no evidence to support the FCC’s claims of a malicious attack; the details were instead fed to the media, through which the narrative was publicized.

“The Inspector General Report tells us what we knew all along: the FCC’s claim that it was the victim of a DDoS attack during the net neutrality proceeding is bogus,” FCC Commissioner Jessica Rosenworcel wrote. “What happened instead is obvious – millions of Americans overwhelmed our online system because they wanted to tell us how important internet openness is to them and how distressed they were to see the FCC roll back their rights. It’s unfortunate that this agency’s energy and resources needed to be spent debunking this implausible claim.”

Source: https://wccftech.com/fcc-admits-lied-ddos-ajit-pai-obama/

GDPR: A tool for your enemies?

Every employee at your organisation should be prepared to deal with right to be forgotten requests.

It’s estimated that 75% of employees will exercise their right to erasure now that the GDPR (General Data Protection Regulation) has come into effect. However, less than half of organisations believe they could handle a ‘right to be forgotten’ (RTBF) request without any impact on day-to-day business.

These findings highlight the underlying issues we’re seeing in the post-GDPR era and how the new regulations put businesses at risk of non-compliance. What is also worrying is that there are wider repercussions for organisations that are not prepared to handle RTBF requests.

No matter how well business is conducted, there is always the possibility of someone who holds a grudge against the company and wants to disrupt its daily operations. One way to do this, without resorting to a standard cyber-attack, is to inundate an organisation with RTBF requests. If a company struggles to complete even a single request, a flood of them can drain its resources and grind the business to a halt. On top of that, failing to comply with the requests in a timely manner can result in a non-compliance issue – a double whammy.

An unfortunate consequence of the new GDPR regulations is that right-to-erasure requests are free to submit, making it more likely that customers, or those with a grudge, will request to have their data removed. There are two ways this can be requested. The first is a simple opt-out: removing the name – usually an email address – from marketing campaigns. The other is a more time-consuming, complex discovery and removal of all applicable data. It is this second type of request that hacktivists, disgruntled customers, or other cyber-attackers could weaponise.

One RTBF request is relatively easy to handle – as long as the company knows where its data is stored, of course – and the organisation has a month from the day of receipt to complete it. However, if a company is inundated with requests arriving on the same or consecutive days, they become difficult to manage and can heavily impact daily operations. This kind of attack is comparable to a Distributed Denial of Service (DDoS) attack – such as the attack on the UK National Lottery last year, which knocked out its entire online and mobile capabilities for hours when cyber-criminals flooded the site with traffic – with companies becoming so overloaded with requests that they have to stop their services entirely.

When preparing for a flood of RTBF requests, it is essential that all organisations have a plan in place that streamlines processes for discovery and deletion of customer data, making it as easy as possible to complete multiple requests simultaneously.
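To make such a plan concrete, a minimal sketch might be a simple tracker that measures every open RTBF request against the one-month statutory deadline, so that a sudden flood can be triaged oldest-first. Everything below is illustrative: the `ErasureRequest` type, its field names, and the use of 30 days as a stand-in for "one month" are assumptions, not a compliance tool.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative proxy for the GDPR's "one month" response window.
DEADLINE = timedelta(days=30)

@dataclass
class ErasureRequest:
    subject_email: str
    received: date
    completed: bool = False

    @property
    def due(self) -> date:
        # Each request must be completed within the window of receipt.
        return self.received + DEADLINE

def overdue(requests: list[ErasureRequest], today: date) -> list[ErasureRequest]:
    """Return open requests past their deadline, oldest first."""
    late = [r for r in requests if not r.completed and r.due < today]
    return sorted(late, key=lambda r: r.received)
```

A dashboard built on something like this would let the DPO see at a glance when a burst of simultaneous requests threatens to push completions past the deadline.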

Don’t let your weakest link be your downfall

The first thing to consider is whether the workforce actually knows what to do should an RTBF request come in (let alone hundreds). Educating all employees on what to do when a request is made – including who in the company to notify and how to respond – is essential to guaranteeing an organisation is prepared. It ensures that any RTBF request is dealt with both correctly and in a timely manner. The process must also have clearly defined responsibilities and auditable actions. For companies with a DPO (Data Protection Officer), or someone who fulfils that role, this is the place to begin.

Discovering data is the best defence

The key to efficiency in responding to RTBF requests is discovering the data. This means the team responsible for the completion of requests is fully aware of where all the data for the organisation is stored. Therefore, a complete list of where the data can be found – and how to find it – is crucial. While data in structured storage such as a database or email is relatively simple to locate and action, it is the unstructured data, such as reports and files, which is difficult to find and is the biggest culprit of draining time and resources.

Running a ‘data discovery’ exercise is invaluable in helping organisations achieve an awareness of where data is located, as it finds data on every system and device from laptops and workstations to servers and cloud drives. Only when an organisation knows where all critical data is located can a team assess its ability to delete it and, where applicable, remove all traces of a customer. Repeating the exercise will highlight any gaps and help indicate where additional tools may be required to address the request. Data-At-Rest scanning is frequently found as one part of a Data Loss Prevention (DLP) solution.
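As an illustration only, a first pass over unstructured data might look like the sketch below: walk a directory tree and list every file that mentions the data subject’s email address. A real Data-At-Rest scanner would also cover binary document formats, archives, databases and cloud drives; `find_subject_data` is a hypothetical name, not any vendor’s API.

```python
import os

def find_subject_data(root: str, email: str) -> list[str]:
    """Walk a directory tree and return paths of files mentioning the email."""
    needle = email.lower().encode()
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    # Case-insensitive match on raw bytes; a real scanner
                    # would decode per-format rather than read blindly.
                    if needle in f.read().lower():
                        hits.append(path)
            except OSError:
                continue  # unreadable file: log it in practice, don't crash
    return sorted(hits)
```

Even a crude sweep like this surfaces the "stray" reports and exports that structured-storage queries miss, which is where most of the time and resources are drained.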

Stray data – a ticking time bomb

Knowing where data is stored within the organisation isn’t the end of the journey, however. The constant sharing of information with partners and suppliers also has to be taken into account – and for this, understanding the data flow into and out of the company is important. Shared-responsibility clauses within the GDPR mean that all partners handling critical data are liable should a breach happen or an RTBF request go uncompleted. If critical data sitting with a partner is not tracked by the company that received the RTBF request, truly completing the request becomes impossible, and the organisation could face fines of up to 20 million EUR (or 4% of global turnover). It is therefore even more important to know how and where critical data is moving at all times, minimising the sharing of information to only those who really need to know.

While there is no silver bullet to prevent stray data, there are a number of technologies which can help to control the data which is sent both in and out of a company. Implementing automated solutions, such as Adaptive Redaction and document sanitisation, will ensure that no recipient receives unauthorised critical data. This will build a level of confidence around the security of critical data for both the organisation and the customer.

With the proper processes and technologies in place, dealing with RTBF requests is a straightforward process, whether it is a legitimate request, or an attempt by hacktivists or disgruntled customers to wreak havoc on an organisation. Streamlining data discovery processes and controlling the data flowing in and out of the company will be integral in allowing a business to complete a RTBF request and ultimately defend the organisation against a malicious use of GDPR.

Source: https://www.itproportal.com/features/gdpr-a-tool-for-your-enemies/

Hackers replacing volumetric DDoS attacks with “low and slow” attacks

By the middle of last year, organisations across the UK had woken up to the threat of DDoS attacks that had, by November, increased in frequency by a massive 91 percent over Q1 2017 and 35 percent over Q2 figures.

A report by CDNetworks in October revealed that more than half of all organisations had ended up as victims of DDoS attacks that regularly took their website, network or online apps down.
To deter cyber-criminals from launching powerful DDoS attacks, organisations began pouring in huge investments to shore up their defences against DDoS attacks. According to CDNetworks, average annual spending on DDoS mitigation in the UK rose to £24,200 last year, with 20 percent of all businesses investing more than £40,000 in the period.
Such investments also resulted in increased confidence amongst businesses in defending against business continuity threats such as DDoS attacks, but unfortunately, increased investments did little to stop the flow of such attacks. Kaspersky Lab’s Global IT Security Risks Survey 2017 noted that the number of DDoS attacks on UK firms doubled since 2016, affecting 33 percent of all firms.
An analysis of DDoS attacks published by Alex Cruz Farmer, security product manager at Cloudflare, has revealed that while organisations in the UK have certainly upped their spending on DDoS mitigation, cyber-criminals are responding by switching to Layer 7 DDoS attacks, which impact applications and the end-user, and moving away from traditional Layer 3 and 4 attacks, whose effectiveness is no longer guaranteed. This has ensured the unabated continuance of DDoS attacks on enterprises.
“The key difference to these (Layer 7) attacks is they are no longer focused on using huge payloads (volumetric attacks), but based on Requests per Second to exhaust server resources (CPU, Disk and Memory),” he said, adding that by their very nature, Layer 7 based DDoS attacks, such as credential stuffing and content scraping, do not last too long and do not flood networks with hundreds of gigabytes of junk network traffic per second like traditional DDoS attacks.
Farmer added that Layer 7 DDoS attacks have become so popular among hackers that Cloudflare detected around 160 attacks occurring each day, with some days spiking to over 1,000 attacks. For example, hackers frequently carry out enumeration attacks by identifying expensive operations in apps and hammering them with bots to tie up resources and slow down or crash those apps. In one case, a database platform was targeted with over 100,000,000 bad requests in just six hours.
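Because these attacks are defined by requests per second rather than payload size, detecting them comes down to counting requests per client over a sliding time window. The sketch below is a hypothetical illustration of that idea, not any vendor’s detection logic; the window and threshold values are arbitrary.

```python
from collections import defaultdict, deque

class RateWatcher:
    """Flag clients whose request rate, not payload size, marks a Layer 7 flood."""

    def __init__(self, window_seconds: float = 10.0, max_requests: int = 100):
        self.window = window_seconds
        self.limit = max_requests
        self.hits: dict[str, deque] = defaultdict(deque)  # ip -> recent timestamps

    def record(self, ip: str, now: float) -> bool:
        """Record one request; return True if the client exceeds the limit."""
        q = self.hits[ip]
        q.append(now)
        # Drop timestamps that have slid out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit
```

In practice a flagged client would be challenged or throttled rather than hard-blocked, since low-and-slow traffic is easy to confuse with a busy legitimate user.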
Indeed, the first signs of short duration yet persistent DDoS attacks were observed in May last year. Imperva Incapsula’s Global DDoS Threat Landscape Report, which analysed more than 17,000 network and application layer DDoS attacks, concluded that 80 percent of DDoS attacks lasted less than an hour, occurred in bursts, and three-quarters of targets suffered repeat assaults, in which 19 percent were attacked 10 times or more.
“These attacks are a sign of the times; launching a DDoS assault has become as simple as downloading an attack script or paying a few dollars for a DDoS-for-hire service. Using these, non-professionals can take a website offline over a personal grievance or just as an act of cyber-vandalism in what is essentially a form of internet trolling,” said Igal Zeifman, Incapsula security evangelist at Imperva to SC Media UK.
Sean Newman, director of Corero Network Security told SC Media UK that reports of increasing application layer DDoS attacks are only to be expected, as attackers continue to look for alternate vectors to meet their objectives.
“A perception that volumetric DDoS attacks are on the decline, is understandable, especially if that is your only lens on the problem.  However, when your view is based on having deployed the latest generation of always-on, real-time, DDoS protection, you will find a rather different story.
“With this lens on the problem, you will find that there is a significantly increasing trend for smaller, more calculated, volumetric DDoS attacks. In fact, Corero customers saw an increase in volumetric attacks of 50 percent compared to a year ago, with over 90 percent of those attacks being less than 5Gbps in size and over 70 percent lasting less than 10 minutes in duration,” he added.
According to Joseph Carson, chief security scientist at Thycotic, organisations are adopting various mitigation techniques to defend against targeted and repeated DDoS attacks, but such technologies often consume a lot of bandwidth and system memory themselves, and thereby interfere with the smooth functioning of databases and apps.
“A targeted DDoS attack is something that is very challenging to mitigate against, though luckily such attacks are periodic, occurring for a short amount of time, usually from days to a few weeks. Techniques that are commonly used today are mitigation techniques using Access Control Lists, rate limiting and filtering of source IP addresses, though each of these is resource-intensive and can prevent legitimate users from getting access to your services.
“A few important lessons can be learned from Estonia’s DDoS experience back in 2007: be very careful as to what mitigation techniques you use, as some companies’ responses can be more costly than the DDoS attack itself, so always respond to each attack with the appropriate mitigation response.
“Though the best way to really defend and protect against future DDoS attacks is to think in terms of geographic distribution and not have any centrally dependent location of service. Estonia learned this in 2007 and has now distributed itself beyond its own country’s borders using Data Embassies,” he added.
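Of the mitigation techniques Carson lists, rate limiting lends itself to a short illustration. The sketch below shows a classic token-bucket limiter, offered as a hypothetical example rather than any product’s actual implementation: each client gets a bucket that refills at a steady rate, so short bursts are tolerated while sustained floods are throttled.

```python
class TokenBucket:
    """Classic token-bucket rate limiter for a single client."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst allowance
        self.tokens = capacity    # start full so the first burst is allowed
        self.last = 0.0           # timestamp of the previous call

    def allow(self, now: float) -> bool:
        """Refill according to elapsed time, then spend one token if available."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # request should be delayed or dropped
```

This is also where Carson’s caution applies: the limiter itself costs memory per tracked client, and overly tight limits punish legitimate users, so the parameters should match observed normal traffic rather than worst-case fears.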
Source: https://www.scmagazineuk.com/hackers-replacing-volumetric-ddos-attacks-with-low-and-slow-attacks/article/767988/