DDoS Protection is the Foundation for Application, Site and Data Availability

When we think of DDoS protection, we often think about how to keep our website up and running. While searching for a security solution, you’ll find several options that are similar on the surface. The main difference is whether your organization requires a cloud, on-premise or hybrid solution that combines the best of both worlds. Finding a DDoS mitigation/protection solution seems simple, but there are several things to consider.

It’s important to remember that DDoS attacks don’t just knock websites offline. While the majority do cause a service disruption, 90 percent of the time that disruption is performance degradation rather than complete unavailability. As a result, organizations need a DDoS solution that can optimize application performance as well as protect against DDoS attacks. The two functions are natural bedfellows.

The other thing we often forget is that most traditional DDoS solutions, whether they are on-premise or in the cloud, cannot protect us from an upstream event or a downstream event.

  1. If your carrier is hit with a DDoS attack upstream, your link may be fine but your ability to do anything would be limited. You would not receive any traffic from that pipe.
  2. If your infrastructure provider goes down due to a DDoS attack on its key infrastructure, your organization’s website will go down regardless of how well your DDoS solution is working.

Many DDoS providers will tell you these are not part of a DDoS strategy. I beg to differ.

Finding the Right DDoS Solution

DDoS protection was born out of the need to improve availability and guarantee performance. Today, this is critical: we live in an application-driven world where digital interactions dominate, and a bad experience using an app is worse for customer satisfaction and loyalty than an outage. Most companies are also moving into shared infrastructure environments—otherwise known as the “cloud”—where the performance of the underlying infrastructure is no longer controlled by the end user. With that in mind, here are three capabilities to look for:

  1. Data center or host infrastructure rerouting capabilities give organizations the ability to reroute traffic to secondary data centers or application servers if there is a performance problem caused by something the traditional DDoS prevention solution cannot negate. This may or may not be caused by a traditional DDoS attack, but either way, it’s important to understand how to mitigate the risk of a denial of service caused by infrastructure failure.
  2. Simple-to-use link or host availability solutions offer a unified interface for conducting WAN failover in the event that the upstream provider is compromised. Companies can use BGP, but BGP is complex and rigid. The future needs to be simple and flexible.
  3. Infrastructure and application performance optimization is critical. If we can limit the amount of compute per application transaction, we can reduce the likelihood that a capacity problem in the underlying architecture causes an outage. Instead of thinking only about avoiding performance degradation, what if we actually improved the performance SLA while also limiting risk? It’s similar to deciding to invest your money rather than burying it in the ground.
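
The rerouting and failover capabilities in points 1 and 2 boil down to probing endpoints in priority order and switching when the primary fails. A minimal sketch of the idea, with hypothetical endpoint names and probe logic rather than any vendor's API:

```python
# Toy link/host availability check: try endpoints in priority order and
# fail over to the first healthy one. Endpoint names are illustrative.
def select_endpoint(endpoints, probe):
    """Return the first endpoint whose health probe succeeds."""
    for endpoint in endpoints:
        try:
            if probe(endpoint):
                return endpoint
        except Exception:
            continue  # a probe error counts as "down"
    raise RuntimeError("all links/data centers appear to be down")

# Example: the primary data center fails its probe, so traffic is
# directed to the secondary.
healthy = {"dc2.example.com"}
chosen = select_endpoint(["dc1.example.com", "dc2.example.com"],
                         probe=lambda ep: ep in healthy)
```

A real product would run probes continuously and update DNS or routing, but the decision logic is this simple priority walk.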

Today you can buy separate products to meet these needs, but you are then left with an age-old problem: a disparate collection of poorly integrated best-of-breed solutions that don’t work well together.

These products should work together as part of a holistic solution in which each component compensates for and enhances the others, ultimately helping to improve and ensure application availability, performance and reliability. The goal should be a resilient architecture that prevents or limits the impact of DoS and DDoS attacks of any kind.

Source: https://securityboulevard.com/2018/09/ddos-protection-is-the-foundation-for-application-site-and-data-availability/

Loss of Customer Trust and Confidence Biggest Consequence of DDoS Attacks

A new study from Corero Network Security has revealed that the most damaging consequence of a distributed denial-of-service (DDoS) attack for a business is the erosion of customer trust and confidence.

The firm surveyed IT security professionals at this year’s Infosecurity Europe, with almost half (42%) of respondents stating loss of customer trust and confidence as the worst effect of suffering DDoS, with just 26% citing data theft as the most damaging.

Third most popular among those polled was potential revenue loss (13%), followed by the threat of intellectual property theft (10%).

“Network and web services availability are crucial to ensuring customer satisfaction and sustaining customer trust and confidence in a brand,” said Ashley Stephenson, CEO at Corero Network Security. “These indicators are vital to both the retention and acquisition of customers in highly competitive markets. When an end user is denied access to internet-facing applications or network outages degrade their experience, it immediately impacts brand reputation.”

Corero’s findings come at a time when DDoS attacks continue to cause havoc for organizations around the world.

Link11’s Distributed Denial of Service Report for Europe revealed that DDoS attacks remained at a high level during Q2 2018, with attackers focusing on European targets 9,325 times during the period of April-June. That equated to an average of 102 attacks per day.

“The cyber-threat landscape has become increasingly sophisticated and companies remain vulnerable to DDoS because many traditional security infrastructure products, such as firewalls and IPS, are not sufficient to mitigate modern attacks,” added Corero’s Stephenson. “Proactive DDoS protection is a critical element in proper cybersecurity protection against loss of service and the potential for advanced, multi-modal attack strategies.”

“With our digital economy utterly dependent upon access to the internet, organizations should think carefully about taking steps to proactively protect business continuity, particularly including DDoS mitigation.”

Source: https://www.infosecurity-magazine.com/news/loss-trust-confidence-ddos/

How to Improve Website Resilience for DDoS Attacks – Part II – Caching

In the first post of this series, we talked about the practices that will optimize your site and increase your website’s resilience to DDoS attacks. Today, we are going to focus on caching best practices that can reduce the chances of a DDoS attack bringing down your site.

Website caching is a technique for storing content in a ready-to-go state, with little or no code processing required. When a CDN is in place, the cache stores content at a server location closer to the visitor. It’s basically a point-in-time photograph of the content.

Caching

When a website is accessed, the server usually needs to compile the website code, display the end result to the visitor, and provide the visitor with all the website’s assets. This all takes a toll on your server resources, slowing down the total page load time. To avoid this overhead, it’s necessary to leverage certain types of caching whenever possible.

Caching not only decreases load-time indicators, such as time to first byte (TTFB), it also saves your server resources.

Types of Caching

There are all sorts of caching types and strategies, but we won’t cover them all. In this article, we’ll cover the three we see most often in practice.

Static Files

The first type is the simplest one, called static files caching.

Images, videos, CSS, JavaScript, and fonts should always be served from a content delivery network (CDN). These network providers operate thousands of servers spread across global data centers. This means they can deliver more data, much faster, than your server ever could on its own.

When using a CDN, the chances of your server suffering from bandwidth exhaustion attacks are minimal.

Your website will also be much faster, given that a large portion of website content is static files, which would now be served by the CDN.
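
On the origin side, static assets are usually marked long-lived so the CDN keeps them at the edge and those requests never reach your server. A hedged sketch of the idea; the extension list and max-age values are illustrative assumptions, not any CDN's required configuration:

```python
# Pick Cache-Control headers an origin could send so a CDN caches static
# assets at the edge. Extensions and max-age values are illustrative.
STATIC_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".svg", ".woff2", ".mp4"}

def cache_control_for(path: str) -> str:
    dot = path.rfind(".")
    ext = path[dot:].lower() if dot != -1 else ""
    if ext in STATIC_EXTENSIONS:
        # Fingerprinted static assets can safely be cached for a year.
        return "public, max-age=31536000, immutable"
    # Dynamic pages: force the CDN to revalidate with the origin.
    return "no-cache"
```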

Page Caching

This is definitely the most powerful type of cache. Page caching converts your dynamic website into HTML pages when possible, making the website much faster and decreasing server resource usage.

A while ago, I wrote an article about Testing the Impacts of Website Caching Tools.

In that article, with the help of a simple caching plugin, the web server was able to serve four times more requests using a quarter of the server resources, compared to the test without the caching plugin.
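
The mechanism behind such a plugin can be approximated in a few lines: store rendered HTML per URL path with a TTL, so repeat visits skip the expensive render step. A toy sketch (the TTL value and render callback are illustrative, not any plugin's real API):

```python
import time

class PageCache:
    """Toy page cache: rendered HTML per path, with a TTL."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # path -> (expires_at, html)

    def get_page(self, path, render, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(path)
        if entry and entry[0] > now:
            return entry[1]            # cache hit: no rendering needed
        html = render(path)            # cache miss: do the expensive work
        self.store[path] = (now + self.ttl, html)
        return html

renders = []
def slow_render(path):
    renders.append(path)               # stands in for PHP/DB work
    return f"<html>{path}</html>"

cache = PageCache(ttl_seconds=60)
first = cache.get_page("/home", slow_render, now=0)
second = cache.get_page("/home", slow_render, now=10)  # served from cache
```

The second request never touches the render step, which is exactly where the 4x throughput gain in the test came from.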

However, as you may know, not every page is “cacheable”. This leads us to the next type…

In-Memory Caching

By using software such as Redis or Memcached, your website will be able to retrieve part of your database information straight from the server memory.

Using in-memory caching improves the response time of SQL queries. It also decreases the volume of read and write operations on the web server disk.
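
The get-or-fetch pattern behind Redis and Memcached can be sketched with an in-process dict standing in for the memory store; the query string and `run_query` callback below are illustrative:

```python
class QueryCache:
    """Cache-aside pattern as used with Redis/Memcached (simulated in-process)."""
    def __init__(self):
        self._cache = {}

    def fetch(self, sql, run_query):
        """Serve a query result from memory, falling back to the database."""
        if sql not in self._cache:
            self._cache[sql] = run_query(sql)  # disk/DB hit only on a miss
        return self._cache[sql]

    def invalidate(self, sql):
        """Drop a cached result after the underlying rows change."""
        self._cache.pop(sql, None)

db_hits = []
def run_query(sql):
    db_hits.append(sql)                 # stands in for a real SQL round trip
    return [("alice",), ("bob",)]

cache = QueryCache()
rows1 = cache.fetch("SELECT name FROM users", run_query)
rows2 = cache.fetch("SELECT name FROM users", run_query)  # memory, not DB
```

The explicit `invalidate` step is the design trade-off: in-memory caches are fast precisely because they serve stale data until you tell them otherwise.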

All kinds of websites should be able to leverage in-memory caching, but not every hosting provider supports it. Make sure yours does before trying to use this technology.

Conclusion

We highly recommend using caching wisely to spare your server’s bandwidth and make your website work faster and better.

Our Web Application Firewall (WAF) provides a variety of caching options to suit your website’s needs. It also works as a CDN, improving your website’s performance. Not only do we protect your website from DDoS attacks, we also make it up to 90% faster with our WAF.

We are still planning to cover other best practices about how to improve website resilience for DDoS attacks in other posts. Subscribe to our email feed and don’t miss our educational content based on research from our website security team.

Source: https://securityboulevard.com/2018/08/how-to-improve-website-resilience-for-ddos-attacks-part-ii-caching/

FCC Admits It Lied About the DDoS Attack During Net Neutrality Comment Process – Ajit Pai Blames Obama

During the time the Federal Communications Commission (FCC) was taking public comments ahead of the rollback of net neutrality rules, the agency had claimed its comments system was knocked offline by distributed denial-of-service (DDoS) attacks.

These attacks were used to question the credibility of the comment process, in which millions of Americans had voiced their opposition to the net neutrality rollback. The Commission then chose to ignore the public comments altogether.

FCC now admits it’s been lying about these attacks all this time

No one bought the FCC’s claims that its comment system was targeted by hackers during the net neutrality comment process. Investigators have now validated those suspicions, revealing that there is no evidence to support the claims of DDoS attacks in 2017. Following the investigation, which was carried out after lawmakers and journalists pushed the agency to share evidence of these attacks, FCC Chairman Ajit Pai has released a statement admitting that there was no DDoS attack.

This statement would have been surprising coming from Pai – an ex-Verizon employee who has continued to disregard public comments, stonewall journalists’ requests for data, and ignore lawmakers’ questions – if he hadn’t thrown the CIO under the bus, taking no responsibility whatsoever for the lies. In his statement, Pai blamed the former CIO and the Obama administration for providing “inaccurate information about this incident to me, my office, Congress, and the American people.”

He went on to say that the CIO’s subordinates were scared of disagreeing with him and never approached Pai. If all of that is indeed true, the Chairman hasn’t clarified why he didn’t demand to see the evidence, despite nearly everyone outside the agency already believing that the DDoS claim was nothing but a lie to invalidate the comment process.

“It has become clear that in addition to a flawed comment system, we inherited from the prior Administration a culture in which many members of the Commission’s career IT staff were hesitant to express disagreement with the Commission’s former CIO in front of FCC management. Thankfully, I believe that this situation has improved over the course of the last year. But in the wake of this report, we will make it clear that those working on information technology at the Commission are encouraged to speak up if they believe that inaccurate information is being provided to the Commission’s leadership.”

The statement comes as the result of an independent investigation by the Government Accountability Office that is to be published soon. However, looking at Pai’s statement, it is clear what this report is going to say.

As a reminder, the current FCC leadership didn’t just concoct this story of a DDoS attack. It also tried to bolster its false claims by suggesting that this wasn’t the first such incident, as the FCC had suffered a similar attack in 2014 under former chairman Tom Wheeler, and that Wheeler had lied about the true nature of that attack back in 2014 to save the agency from embarrassment. The former Chairman then went on record to call out Pai’s FCC for lying to the public, as there was no cyberattack under his leadership.

Pai throws CIO under the bus; takes no responsibility

And now it appears the FCC was also lying about the true nature of the comment system’s failure in 2017. In his statement released today, Pai is once again blaming [PDF] the Obama administration for feeding him inaccurate information.

I am deeply disappointed that the FCC’s former [CIO], who was hired by the prior Administration and is no longer with the Commission, provided inaccurate information about this incident to me, my office, Congress, and the American people. This is completely unacceptable. I’m also disappointed that some working under the former CIO apparently either disagreed with the information that he was presenting or had questions about it, yet didn’t feel comfortable communicating their concerns to me or my office.

It remains unclear why the new team that replaced Bray nearly a year ago didn’t debunk what is being called a “conspiracy theory” and come clean about it.

Some redacted emails obtained through the Freedom of Information Act (FOIA) by American Oversight had previously revealed that the false theory about the 2014 cyberattack, used to justify the 2017 claims, also appeared in a draft blog post written on behalf of Pai. That draft was never published online, keeping Pai’s hands clean since there was no evidence to support the FCC’s claims of a malicious attack. These details were instead sent to the media, through which the narrative was publicized.

“The Inspector General Report tells us what we knew all along: the FCC’s claim that it was the victim of a DDoS attack during the net neutrality proceeding is bogus,” FCC Commissioner Jessica Rosenworcel wrote. “What happened instead is obvious – millions of Americans overwhelmed our online system because they wanted to tell us how important internet openness is to them and how distressed they were to see the FCC roll back their rights. It’s unfortunate that this agency’s energy and resources needed to be spent debunking this implausible claim.”

Source: https://wccftech.com/fcc-admits-lied-ddos-ajit-pai-obama/

GDPR: A tool for your enemies?

Every employee at your organisation should be prepared to deal with right to be forgotten requests.

It’s estimated that 75% of employees will exercise their right to erasure now that GDPR (General Data Protection Regulation) has come into effect. However, less than half of organisations believe that they would be able to handle a ‘right to be forgotten’ (RTBF) request without any impact on day-to-day business.

These findings highlight the underlying issues we’re seeing in the post-GDPR era and how the new regulations put businesses at risk of non-compliance. What is also worrying is that there are wider repercussions for organisations that are not prepared to handle RTBF requests.

No matter how well business is conducted, there is always the possibility that someone holds a grudge against the company and wants to disrupt its daily operations. One way to do this, without resorting to a standard cyber-attack, is to inundate an organisation with RTBF requests. Especially when the company struggles to complete even one request, this can drain a company’s resources and grind the business to a halt. In addition, failing to comply with the requests in a timely manner can result in a non-compliance issue – a double whammy.

An unfortunate consequence of the new GDPR regulations is that the right to erasure is free to submit, meaning it is more likely that customers or those with a grudge will request to have their data removed. There are two ways this can be requested. The first is a simple opt-out: removing the name – usually an email address – from marketing campaigns. The other is a more time-consuming, complex discovery and removal of all applicable data. It is this second type of request that hacktivists, disgruntled customers, or other cyber-attackers could weaponise.

One RTBF request is relatively easy to handle – as long as the company knows where its data is stored, of course – and the organisation has a month from the day a request is received to complete it. However, if a company is inundated with requests arriving on the same or consecutive days, they become difficult to manage and have the potential to heavily impact daily operations. This kind of attack is comparable to a Distributed Denial of Service (DDoS) attack – for example, the attack on the UK National Lottery last year, which saw its entire online and mobile capabilities knocked out for hours because cyber criminals flooded the site with traffic – with companies becoming so overloaded with requests that they have to stop their services entirely.

When preparing for a flood of RTBF requests, it is essential that all organisations have a plan in place that streamlines processes for discovery and deletion of customer data, making it as easy as possible to complete multiple requests simultaneously.

Don’t let your weakest link be your downfall

The first thing to consider is whether or not the workforce is actually aware of what to do should a RTBF request come in (let alone hundreds). Educating all employees on what to do should a request be made – including who in the company to notify and how to respond to the request – is essential in guaranteeing an organisation is prepared. It will mean that any RTBF request is dealt with both correctly and in a timely manner. The process must also have clearly defined responsibilities and actions able to be audited. For companies with a DPO (Data Protection Officer) or someone who fulfils that role, this is the place to begin this process.

Discovering data is the best defence

The key to efficiency in responding to RTBF requests is discovering the data. This means the team responsible for the completion of requests is fully aware of where all the data for the organisation is stored. Therefore, a complete list of where the data can be found – and how to find it – is crucial. While data in structured storage such as a database or email is relatively simple to locate and action, it is the unstructured data, such as reports and files, which is difficult to find and is the biggest culprit of draining time and resources.

Running a ‘data discovery’ exercise is invaluable in helping organisations achieve an awareness of where data is located, as it finds data on every system and device from laptops and workstations to servers and cloud drives. Only when you know where all critical data is located, can a team assess its ability to delete it and, where applicable, remove all traces of a customer. Repeating the exercise will highlight any gaps and help indicate where additional tools may be required to address the request. Data-At-Rest scanning is frequently found as one part of a Data Loss Prevention (DLP) solution.
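
As a toy illustration of such an exercise, the sketch below walks a directory tree and reports files mentioning a data subject's email address. Real discovery tools also scan databases, mailboxes and cloud drives; the file names here are illustrative:

```python
import os
import tempfile

def find_subject_data(root, email):
    """Walk a directory tree and list files containing a subject's email."""
    matches = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                    if email in fh.read():
                        matches.append(path)
            except OSError:
                continue  # unreadable file: skip (a real tool would log it)
    return sorted(matches)

# Example: only the file holding the subject's address is reported.
with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "crm_export.csv"), "w") as f:
        f.write("alice@example.com,opted-in\n")
    with open(os.path.join(root, "notes.txt"), "w") as f:
        f.write("meeting minutes\n")
    found = find_subject_data(root, "alice@example.com")
    n_found = len(found)
```

Repeating this kind of scan after each deletion pass is how a team verifies that all traces of a subject really are gone.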

Stray data – a ticking time bomb

Knowing where data is stored within the organisation isn’t the end of the journey, however. The constant sharing of information with partners and suppliers also has to be taken into account – and for this, understanding the data flow into and out of the company is important. Shared responsibility clauses within GDPR mean that all partners involved with critical data are liable should a breach happen or should a RTBF request go uncompleted. If critical data sitting with a partner is not tracked by the company that received the RTBF request, it becomes impossible to truly complete the request, and the organisation could face fines of up to 20 million EUR (or 4% of global turnover, whichever is higher). Therefore, it’s even more important to know how and where critical data is moving at all times, minimising the sharing of information to only those who really need to know.
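
For scale, the fine ceiling can be computed directly. Under GDPR Article 83(5), the higher of the two figures applies for the most serious infringements; the turnover figure below is illustrative:

```python
def max_gdpr_fine_eur(global_annual_turnover_eur):
    """Ceiling on a GDPR fine for the most serious infringements:
    EUR 20 million or 4% of global annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# A company with EUR 1bn turnover faces up to EUR 40m, not EUR 20m.
ceiling = max_gdpr_fine_eur(1_000_000_000)
```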

While there is no silver bullet to prevent stray data, there are a number of technologies which can help to control the data which is sent both in and out of a company. Implementing automated solutions, such as Adaptive Redaction and document sanitisation, will ensure that no recipient receives unauthorised critical data. This will build a level of confidence around the security of critical data for both the organisation and the customer.

With the proper processes and technologies in place, dealing with RTBF requests is a straightforward process, whether it is a legitimate request, or an attempt by hacktivists or disgruntled customers to wreak havoc on an organisation. Streamlining data discovery processes and controlling the data flowing in and out of the company will be integral in allowing a business to complete a RTBF request and ultimately defend the organisation against a malicious use of GDPR.

Source: https://www.itproportal.com/features/gdpr-a-tool-for-your-enemies/

Meet MyloBot malware turning Windows devices into Botnet

The IT security researchers at deep learning cybersecurity firm Deep Instinct have discovered a sophisticated malware in the wild targeting Microsoft’s Windows-based computers.

Adding devices to Botnet

The malware works in such a way that, upon infection, it allows hackers to take over the device and make it part of a botnet to carry out malicious activities, including conducting Distributed Denial of Service (DDoS) attacks, spreading malware, or infecting the system with ransomware.

A Botnet is a network of private computers infected with malicious software and controlled as a group without the owners’ knowledge, e.g., to send spam messages.

Apart from these, the malware not only steals user data, it also disables anti-virus programs and removes other malware installed on the system. Dubbed MyloBot by Deep Instinct, the malware’s capabilities and sophistication led researchers to say they have “never seen” anything like it before.

Furthermore, once installed, MyloBot starts disabling key features on the system, including Windows Update and Windows Defender, blocking ports in the Windows Firewall, and deleting applications and other malware on the system.

“This can result in loss of the tremendous amount of data, the need to shut down computers for recovery purposes, which can lead to disasters in enterprises. The fact that the botnet behaves as a gate for additional payloads, puts the enterprise in risk for the leak of sensitive data as well, following the risk of keyloggers/banking trojans installations,” researchers warned.

Dark Web connection

Further analysis of a MyloBot sample reveals that the campaign is operated from the dark web, while its command and control (C&C) infrastructure is also part of other malicious campaigns.

Although it is unclear how MyloBot is being spread, researchers discovered the malware on one of their clients’ systems, where it sat idle for 14 days, one of its delaying mechanisms before contacting its command and control servers.

It is not surprising that Windows users are being targeted with MyloBot. Last week, another malware strain called Zacinlo was caught infecting Windows 10, Windows 7 and Windows 8 PCs. Therefore, if you are a Windows user, watch out for both threats: keep your system updated, run full anti-virus scans, refrain from visiting malicious sites and do not download files from unknown emails.

Deep Instinct has yet to publish a research paper covering MyloBot from end to end.

Source: https://www.hackread.com/meet-mylobot-malware-turning-windows-devices-into-botnet/

‘The platform is under extreme load’: Cyber attack brings major cryptocurrency exchange to its knees

  • One of the largest cryptocurrency exchanges shut Tuesday morning because of a cyber attack.
  • “The platform is under extreme load,” Bitfinex said at 9:39 a.m. ET.
  • Bitcoin was trading slightly lower at $7,421 a coin, according to Markets Insider data.
Bitfinex, one of the largest cryptocurrency exchanges by trading volume, was down Tuesday morning after it experienced a cyber attack. According to its incident page, the exchange shut early Tuesday morning after it experienced problems with its trading engine. For a short period the exchange was back online after the issue was addressed, but it was then hit with a so-called denial-of-service attack, in which a network of virus-infected computers overwhelms a website with massive amounts of data.

“The platform is under extreme load,” the exchange said at 9:39 a.m. ET. “We are investigating. Seems a DDoS attack was launched soon after we relaunched the platform.”

Still, clients’ funds were not impacted, according to a statement by Kasper Rasmussen, head of marketing at Bitfinex.

“The attack only impacted trading operations, and user accounts and their associated funds/account balances were not at risk at any point during the attack,” Rasmussen said in a statement. “We will continue to update our user base on any further disruptions to service.”

Crypto exchange outages were common at the end of 2017 as bitcoin soared to all-time highs near $20,000, but have been less common in 2018 as prices and volumes across the digital coin market have fallen back to earth.

In 2017, the breakneck growth of the market forced some exchanges to stop onboarding new users altogether. A flash crash at Bitfinex in December left customers demanding answers and refunds.

Hacks and cyber attacks have long been a problem for the crypto space. Notably, Mt. Gox, which was the world’s largest bitcoin exchange, witnessed a massive DDoS attack in 2013. It shut in 2014 after a $450 million hack. JPMorgan estimates that a third of bitcoin exchanges have been hacked.

“Running an exchange is one of the most complex server-side operations out there,” Kyle Samani, a crypto fund manager, told Business Insider.

“On an exchange, everyone wants real time, all the time, globally and the bots are hitting the APIs every few milliseconds both to get order book updates and to trade,” Samani added. “Doing this at scale is much harder than almost any other application.”

Still, Gabor Gurbacs, the director of digital asset strategy at VanEck, told Business Insider he thinks exchanges are getting better at handling technical issues and communicating with clients.

“Recently, exchanges started to halt trading, especially important for margin trades, and provided timely and more transparent notes to customers in cases of service disruptions,” Gurbacs said. “It’s a sign of maturation in my view.”

2018’s less volatile trading environment has given exchanges an opportunity to catch their breath. Bitfinex didn’t experience any technical incidents in the entire month of May.

Bitcoin was trading lower in the aftermath of the DDoS attack. The cryptocurrency was down 1.04% at $7,421 a coin, according to Markets Insider data.

Source: http://www.businessinsider.com/bitfinex-hit-by-cyber-attack-2018-6

Hackers replacing volumetric DDoS attacks with “low and slow” attacks

By the middle of last year, organisations across the UK had woken up to the threat of DDoS attacks that had, by November, increased in frequency by a massive 91 percent over Q1 2017 and 35 percent over Q2 figures.

A report by CDNetworks in October revealed that more than half of all organisations had ended up as victims of DDoS attacks that regularly took their website, network or online apps down.
To deter cyber-criminals from launching powerful DDoS attacks, organisations began pouring in huge investments to shore up their defences against DDoS attacks. According to CDNetworks, average annual spending on DDoS mitigation in the UK rose to £24,200 last year, with 20 percent of all businesses investing more than £40,000 in the period.
Such investments also resulted in increased confidence amongst businesses in defending against business continuity threats such as DDoS attacks, but unfortunately, increased investments did little to stop the flow of such attacks. Kaspersky Lab’s Global IT Security Risks Survey 2017 noted that the number of DDoS attacks on UK firms doubled since 2016, affecting 33 percent of all firms.
An analysis of DDoS attacks published by Alex Cruz Farmer, security product manager at Cloudflare, has revealed that while organisations in the UK have certainly upped their spending on DDoS mitigation, cyber-criminals are now responding by switching to Layer 7 based DDoS attacks which impact applications and the end-user while ignoring traditional Layer 3 and 4 attacks whose effectiveness is no longer guaranteed. This has ensured the unabated continuance of DDoS attacks on enterprises.
“The key difference to these (Layer 7) attacks is they are no longer focused on using huge payloads (volumetric attacks), but based on Requests per Second to exhaust server resources (CPU, Disk and Memory),” he said, adding that by their very nature, Layer 7 based DDoS attacks, such as credential stuffing and content scraping, do not last too long and do not flood networks with hundreds of gigabytes of junk network traffic per second like traditional DDoS attacks.
Farmer added that Layer 7 based DDoS attacks have become so popular among hackers that Cloudflare detected around 160 attacks occurring each day, with some days spiking up to over 1000 attacks. For example, hackers are frequently carrying out enumeration attacks by identifying expensive operations in apps and hammering at them with bots to tie up resources and slow down or crash such apps. For instance, a database platform was targeted with over 100,000,000 bad requests in just 6 hours!
Indeed, the first signs of short duration yet persistent DDoS attacks were observed in May last year. Imperva Incapsula’s Global DDoS Threat Landscape Report, which analysed more than 17,000 network and application layer DDoS attacks, concluded that 80 percent of DDoS attacks lasted less than an hour, occurred in bursts, and three-quarters of targets suffered repeat assaults, in which 19 percent were attacked 10 times or more.
“These attacks are a sign of the times; launching a DDoS assault has become as simple as downloading an attack script or paying a few dollars for a DDoS-for-hire service. Using these, non-professionals can take a website offline over a personal grievance or just as an act of cyber-vandalism in what is essentially a form of internet trolling,” said Igal Zeifman, Incapsula security evangelist at Imperva to SC Media UK.
Sean Newman, director of Corero Network Security told SC Media UK that reports of increasing application layer DDoS attacks are only to be expected, as attackers continue to look for alternate vectors to meet their objectives.
“A perception that volumetric DDoS attacks are on the decline is understandable, especially if that is your only lens on the problem. However, when your view is based on having deployed the latest generation of always-on, real-time DDoS protection, you will find a rather different story.
“With this lens on the problem, you will find that there is a significantly increasing trend for smaller, more calculated, volumetric DDoS attacks. In fact, Corero customers saw an increase in volumetric attacks of 50 percent compared to a year ago, with over 90 percent of those attacks being less than 5Gbps in size and over 70 percent lasting less than 10 minutes,” he added.
According to Joseph Carson, chief security scientist at Thycotic, organisations are adopting various mitigation techniques to defend against targeted and repeated DDoS attacks, but these technologies often consume significant bandwidth and system memory themselves, interfering with the smooth functioning of databases and apps.
“A targeted DDoS attack is something that is very challenging to mitigate, though luckily such attacks are periodic, occurring for a short amount of time, usually from days to a few weeks. Techniques that are commonly used today include Access Control Lists, rate limiting and filtering of source IP addresses, though each of these is resource intensive and can prevent legitimate users from getting access to your services.
“A few important lessons can be learned from Estonia’s DDoS experience back in 2007: be very careful about which mitigation techniques you use, as some companies’ responses can be more costly than the DDoS attack itself, so always respond to each attack with the appropriate mitigation response.
“Though the best way to really defend and protect against future DDoS attacks is to think in terms of geographic distribution and not have any centrally dependent location of service. Estonia learned this in 2007 and has now distributed itself beyond its own country’s borders using Data Embassies,” he added.
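Of the mitigation techniques Carson lists, rate limiting is the most commonly implemented in software. A classic approach is a per-source-IP token bucket, which allows a steady request rate plus a bounded burst and drops anything beyond that. The sketch below is illustrative only (the class name and the rate/burst figures are assumptions, not a recommendation from any of the vendors quoted), and it also shows the resource cost Carson warns about, since state must be tracked per source IP:

```python
import time

class TokenBucketLimiter:
    """Per-source-IP token bucket: admit `rate` requests/second, bursts up to `burst`."""

    def __init__(self, rate=5.0, burst=10.0):  # illustrative values
        self.rate = rate
        self.burst = burst
        self.buckets = {}  # ip -> (tokens_remaining, last_refill_time)

    def allow(self, ip, now=None):
        """Return True if this request should be admitted, False if rate-limited."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(ip, (self.burst, now))
        # Refill tokens for the elapsed time, capped at the burst capacity.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[ip] = (tokens - 1.0, now)
            return True
        self.buckets[ip] = (tokens, now)
        return False
```

The trade-off Carson describes is visible here: a spoofed or widely distributed attack inflates the `buckets` table itself, and an overly aggressive rate will block legitimate clients behind shared IPs such as corporate NATs.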
Source: https://www.scmagazineuk.com/hackers-replacing-volumetric-ddos-attacks-with-low-and-slow-attacks/article/767988/

Man Sentenced to 15 Years in Prison for DDoS Attacks, Firearm Charges

A New Mexico man has been sentenced to 15 years in prison for launching distributed denial-of-service (DDoS) attacks on dozens of organizations and for firearms-related charges.

John Kelsey Gammell, 55, used several so-called booter services to launch cyberattacks, including VDoS, CStress, Inboot, Booter.xyz, and IPStresser. His targets included former employers, business competitors, companies that refused to hire him, colleges, law enforcement agencies, courts, banks, and telecoms firms.

Gammell took measures to avoid exposing his real identity online, including through the use of cryptocurrencies to pay for the DDoS attacks and VPNs. However, a couple of taunting emails he sent to his victims during the DDoS attacks – asking if they had any IT issues he could help with – were sent from Gmail and Yahoo addresses that had been accessed from his home IP address.

The man initially rejected a plea deal and his attorney sought the dismissal of the case, but in January he pleaded guilty to one count of conspiracy to commit intentional damage to a protected computer and two counts of being a felon-in-possession of a firearm. Gammell, a convicted felon, admitted having numerous firearms and hundreds of rounds of ammunition.

In addition to the 180-month prison sentence, Gammell will have to pay restitution to victims of his DDoS attacks, but that amount will be determined at a later date.

Source: https://www.securityweek.com/man-sentenced-15-years-prison-ddos-attacks-firearm-charges

Danish Railway Company DSB Suffers DDoS Attack

Danish rail travelers found buying a ticket difficult yesterday, following a DDoS attack on the railway company DSB.

DSB has more than 195 million passengers every year but, as reported by The Copenhagen Post, the attack on Sunday made it impossible for customers to purchase a ticket via the DSB app, on the website, at ticket machines and certain kiosks at stations – though passengers were able to buy tickets from staff on trains.

“We have all of our experts on the case,” said DSB spokesperson Aske Wieth-Knudsen, with all systems apparently working as normal this morning.

“The DDoS attack seen in Denmark this weekend on critical national infrastructure is precisely the type of attack that EU Governments are seeking to protect citizens against with last week’s introduction of the Network and Information Systems Directive (NIS),” said Andrew Lloyd, president, Corero Network Security.

“Keeping the control systems (e.g. railway signaling, power circuits and track movements) secure greatly reduces the risk of a catastrophic outcome that risks public safety. That said, a successful attack on the more vulnerable management systems can cause widespread disruption. This DDoS attack on the Danish railways ticketing site can be added to a growing list of such cyber-attacks that include last October’s DDoS attack on the Swedish Railways that took out their train ordering system for two days, resulting in travel chaos.”

The lessons are clear, Lloyd added; transportation companies and other operators of essential services have to invest in proactive cybersecurity defenses to ensure that their services can stay online and open for business during a cyber-attack.

Source: https://www.infosecurity-magazine.com/news/danish-railway-ddos-attack/