Concern Mounts for SS7, Diameter Vulnerability

The same security flaws that plagued the older SS7 standard used with 2G and 3G networks are prevalent in the Diameter protocol used with today’s 4G (LTE) telephony and data transfer standard, according to researchers at Positive Technologies and the European Union Agency for Network and Information Security (ENISA).

Network security is built on trust between operators and IPX providers, and the Diameter protocol that replaced SS7 was supposed to be an improved network signaling protocol. But when 4G operators misconfigure the Diameter protocol, the same types of vulnerabilities still exist.

“As society continues to leverage mobile data capabilities more and more heavily, from individual users performing more tasks directly on their smartphones, to IoT devices which use it when regular network connections are not available (or not possible), service providers need to take the security of this important communications channel more seriously,” said Sean Newman, director of product management for Corero Network Security.

Given that the Diameter protocol is slated to be used in 5G, reports that critical security capabilities are not being enabled in the Diameter deployments used for 4G mobile networks are worrisome. Of particular concern is that the misconfigurations behind these vulnerabilities could enable distributed denial of service (DDoS) attacks against critical infrastructure that relies on mobile access – and an attacker would not need to harness any large-scale distributed attack capability.

“The latest generation of denial of service protection solutions are critical for any organization that relies on always-on internet availability, but this can only be effective if service providers are ensuring the connectivity itself is always-on,” Newman said.

Concerns over the threats from smartphones have even been presented to Congress, with pleas that it act immediately to protect the nation from the cybersecurity threats in SS7 and Diameter.

“SS7 and Diameter were designed without adequate authentication safeguards. As a result, attackers can mimic legitimate roaming activity to intercept calls and text messages, and can imitate requests from a carrier to locate a mobile device. Unlike cell-site simulator attacks, SS7 and Diameter attacks do not require any physical proximity to a victim,” wrote Jonathan Mayer, assistant professor of computer science and public affairs, Princeton University, in his testimony before the Committee on Science, Space, and Technology on 27 June.

Source: https://www.infosecurity-magazine.com/news/concern-mounts-for-ss7-diameter/

GDPR: A tool for your enemies?

Every employee at your organisation should be prepared to deal with right to be forgotten requests.

It’s estimated that 75% of employees will exercise their right to erasure now that the GDPR (General Data Protection Regulation) has come into effect. However, fewer than half of organisations believe they could handle a ‘right to be forgotten’ (RTBF) request without any impact on day-to-day business.

These findings highlight the underlying issues we’re seeing in the post-GDPR era and how the new regulations put businesses at risk of non-compliance. What is also worrying is that there are wider repercussions for organisations that are not prepared to handle RTBF requests.

No matter how well business is conducted, there is always the possibility of someone who holds a grudge against the company and wants to disrupt daily operations. One way to do this, without resorting to a standard cyber-attack, is to inundate an organisation with RTBF requests. If a company struggles to complete even a single request, a flood of them can drain its resources and grind the business to a halt. On top of this, failing to comply with the requests in a timely manner creates a non-compliance issue – a double whammy.

An unfortunate consequence of the GDPR is that a right-to-erasure request is free to submit, making it more likely that customers – or those with a grudge – will request to have their data removed. There are two ways this can be requested. The first is a simple opt-out: removing the name – usually an email address – from marketing campaigns. The other is a more time-consuming, complex discovery and removal of all applicable data. It is this second type of request that hacktivists, aggrieved customers and other cyber-attackers could weaponise.

One RTBF request is relatively easy to handle – as long as the company knows where its data is stored, of course – and the organisation has a month from the day of receipt to complete it. However, if a company is inundated with requests arriving on the same or consecutive days, the workload becomes difficult to manage and has the potential to heavily impact daily operations. This kind of attack is comparable to a Distributed Denial of Service (DDoS) attack – for example the attack on the UK National Lottery last year, which saw its entire online and mobile capabilities knocked out for hours because cyber criminals flooded the site with traffic – with companies becoming so overloaded with requests that they have to stop their services entirely.
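The one-month clock described above can be sketched in code. This is a minimal illustration, assuming the deadline is one calendar month after receipt, clamped to the last day of shorter months; the helper name and clamping rule are illustrative, not legal advice:

```python
from datetime import date
import calendar

def rtbf_deadline(received, months=1):
    """Return an illustrative GDPR response deadline: one calendar
    month after the request was received. If the target month is
    shorter (e.g. a request received on 31 January), the deadline
    is clamped to the last day of that month."""
    month_index = received.month - 1 + months
    year = received.year + month_index // 12
    month = month_index % 12 + 1
    # clamp the day to the length of the target month
    day = min(received.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(rtbf_deadline(date(2018, 5, 25)))  # 2018-06-25
print(rtbf_deadline(date(2018, 1, 31)))  # clamped to 2018-02-28
```

Tracking such deadlines per request makes it obvious when a burst of simultaneous requests is about to push the organisation past its compliance window.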

When preparing for a flood of RTBF requests, it is essential that all organisations have a plan in place that streamlines processes for discovery and deletion of customer data, making it as easy as possible to complete multiple requests simultaneously.

Don’t let your weakest link be your downfall

The first thing to consider is whether or not the workforce actually knows what to do should a RTBF request come in (let alone hundreds). Educating all employees on how to handle a request – including who in the company to notify and how to respond – is essential to guaranteeing the organisation is prepared. It will mean that any RTBF request is dealt with both correctly and in a timely manner. The process must also have clearly defined responsibilities and auditable actions. For companies with a DPO (Data Protection Officer) or someone who fulfils that role, this is the place to begin.

Discovering data is the best defence

The key to efficiency in responding to RTBF requests is discovering the data. This means the team responsible for the completion of requests is fully aware of where all the data for the organisation is stored. Therefore, a complete list of where the data can be found – and how to find it – is crucial. While data in structured storage such as a database or email is relatively simple to locate and action, it is the unstructured data, such as reports and files, which is difficult to find and is the biggest culprit of draining time and resources.

Running a ‘data discovery’ exercise is invaluable in helping organisations achieve an awareness of where data is located, as it finds data on every system and device from laptops and workstations to servers and cloud drives. Only when you know where all critical data is located, can a team assess its ability to delete it and, where applicable, remove all traces of a customer. Repeating the exercise will highlight any gaps and help indicate where additional tools may be required to address the request. Data-At-Rest scanning is frequently found as one part of a Data Loss Prevention (DLP) solution.
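A data discovery pass over unstructured storage can be sketched very simply. The following is a toy stand-in, assuming plain-text files on a local tree; the function name is hypothetical, and a real DLP scanner would also handle binary formats, archives, cloud drives and access errors far more robustly:

```python
import os
import re

def discover_subject_data(root, email):
    """Walk a directory tree and return the files that mention the
    given email address -- a toy stand-in for a data-at-rest scan
    over unstructured storage (reports, exports, log files)."""
    pattern = re.compile(re.escape(email), re.IGNORECASE)
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                    if pattern.search(fh.read()):
                        hits.append(path)
            except OSError:
                continue  # unreadable file: a real tool would log this
    return hits
```

Repeating such a scan after each deletion pass is one way to verify that all traces of a data subject have actually been removed.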

Stray data – a ticking time bomb

Knowing where data is stored within the organisation isn’t the end of the journey, however. The constant sharing of information with partners and suppliers also has to be taken into account – and for this, understanding the data flow into and out of the company is important. Shared responsibility clauses within the GDPR mean that all partners handling critical data are liable should a breach happen or a RTBF request fail. If critical data sitting with a partner is not tracked by the company that received the RTBF request, the request becomes impossible to truly complete, and the organisation could face fines of up to €20 million or 4% of global annual turnover, whichever is higher. Therefore, it’s even more important to know how and where critical data is moving at all times, minimising the sharing of information to only those who really need to know.

While there is no silver bullet to prevent stray data, there are a number of technologies which can help to control the data which is sent both in and out of a company. Implementing automated solutions, such as Adaptive Redaction and document sanitisation, will ensure that no recipient receives unauthorised critical data. This will build a level of confidence around the security of critical data for both the organisation and the customer.

With the proper processes and technologies in place, dealing with RTBF requests is a straightforward process, whether it is a legitimate request, or an attempt by hacktivists or disgruntled customers to wreak havoc on an organisation. Streamlining data discovery processes and controlling the data flowing in and out of the company will be integral in allowing a business to complete a RTBF request and ultimately defend the organisation against a malicious use of GDPR.

Source: https://www.itproportal.com/features/gdpr-a-tool-for-your-enemies/

Small businesses aren’t properly prepared for cyberattacks

Even though businesses all over the world are increasingly taking online protection seriously – they still aren’t 100 per cent confident they could tackle serious cybersecurity threats.

Polling 600 businesses in the US, UK and Australia, a study by Webroot found that new types of attacks are dominating in 2018 (compared to the year before) but that the cost of a breach is decreasing, as well.

Phishing has overtaken malware to claim the number one spot as the most dangerous type of attack. Ransomware is also up, from fifth to third, largely due to the success of WannaCry.

At 25 per cent globally, insider threats appear to be the least dangerous of the bunch.

When it comes to the UK in particular, ransomware is the biggest threat. UK SMBs are also far less concerned about DDoS attacks than their US counterparts.

The report also took a closer look at training and uncovered that even though almost all businesses conduct training to teach their staff about cybersecurity, this training isn’t continuous. This leads to the next statistic: 79 per cent can’t say they are “completely ready to manage IT security and protect against threats.”

“As our study shows, the rise of new attacks is leaving SMBs feeling unprepared,” commented Charlie Tomeo, vice president of worldwide business sales, Webroot.

“One of the most effective strategies to keep your company safe is with a layered cybersecurity strategy that can secure users and their devices at every stage of an attack, across every possible attack vector.”

Source: https://www.itproportal.com/news/small-businesses-arent-prepared-for-cyberattacks/

Protonmail Hit By Yet Another DDoS Attack

Attack comes as scale, scope and sophistication of DDoS attacks rise sharply

Popular encrypted email provider Protonmail was this morning hit by the latest in a long-running series of malicious attacks on its infrastructure.

The privacy-focussed Geneva-based email provider, which has some 500,000 users, has faced numerous DDoS attacks since being founded.

As one of the few email providers that owns and manages all of its servers and network components, such as routers and switches, it is in a unique position – particularly since the company is its own internet service provider.

In 2015 its servers were hit with a 50Gbps wall of “junk data” that threatened to torpedo the company.

After initially paying a ransom following an attack that took its main data centre offline, the company faced a further week-long assault from another adversary that targeted 15 different ISP nodes simultaneously, then attacked all the ISPs going into the datacentre using a wide range of sophisticated tactics.

No ransom nor responsibility claim was made.

The company, born from work done at CERN, has since partnered with DDoS protection specialists, Israel-headquartered Radware, and uses BGP redirection and GRE tunnels to defend itself. Today’s attack slowed email delivery and its VPN for several hours, but did not result in the loss of any emails, Protonmail said.

“Our network was hit by a DDoS attack that was unlike the more ‘generic’ DDoS attacks that we deal with on a daily basis. As a result, our upstream DDoS protection service (Radware) needed more time than usual to perform mitigation,” a ProtonMail spokesperson wrote in an email.

“Radware is making adjustments to their DDoS protection systems to better mitigate against this type of attack in the future. While we don’t yet have our own measurement of the attack size, we have traced the attack back to a group that claims to have ties to Russia, and the attack is said to have been 500 Gbps, which would be among the largest DDoS attacks on record,” the spokesperson wrote.

Carl Herberger, Vice President for Security Solutions at Radware, earlier noted: “Corporations need to understand the severity of the Advanced Persistent DoS attacks, such as SMTP DoS, and review their security measures”.

“APDoS is akin to the way bomber aircraft would jam radar systems many years ago – the type of attack is so varied and frequent that it becomes near impossible to detect them all, and more importantly difficult to mitigate them without impacting your legitimate web traffic.”

DDoS Attacks Continue to Rise

The attack comes after a new report from Akamai revealed that there was a 16 percent increase in the number of DDoS attacks recorded since last year, with the largest DDoS attack of the year setting a new record at 1.35 Tbps by using a memcached reflector attack.

Akamai said in its State of the Internet report: “To understand the scale of such an attack, it helps to compare it to the intercontinental undersea cables in use today. The TAT-14 cable, one of many between the US and Europe, is capable of carrying 3.2 Tbps of traffic, while the Japan-Guam-Australia cable, currently under construction, will be capable of 36 Tbps. Neither of these hugely important cables would have been completely swamped by February’s attack, but an attack of that magnitude would have made a significant impact on intercontinental traffic, if targeted correctly.”

The company’s researchers also identified a four percent increase in reflection-based DDoS attacks since last year and a 38 percent increase in application-layer attacks such as SQL injection or cross-site scripting.

Source: https://www.cbronline.com/news/protonmail-ddos

How to Prevent DDoS Attacks: 6 Tips to Keep Your Website Safe

Falling victim to a distributed denial of service (DDoS) attack can be catastrophic: The average cost to an organization of a successful DDoS attack is about $100,000 for every hour the attack lasts, according to security company Cloudflare.

There are longer term costs too: loss of reputation, brand degradation and lost customers, all leading to lost business. That’s why it is worth investing significant resources to prevent a DDoS attack, or at least minimize the risk of falling victim to one, rather than concentrating on how to stop a DDoS attack once one has been started.

In the first article in this series, we discussed how to stop DDoS attacks. If you’re fortunate enough to have survived an attack – or are simply wise enough to think ahead – we will now address preventing DDoS attacks.

Understanding DDoS attacks

A basic volumetric denial of service (DoS) attack often involves bombarding an IP address with large volumes of traffic. If the IP address points to a Web server, legitimate traffic will be unable to contact it and the website becomes unavailable. Another type of DoS attack is a flood attack, where a group of servers are flooded with requests that need processing by the victim machines. These are often generated in large numbers by scripts running on compromised machines that are part of a botnet, and result in exhausting the victim servers’ resources such as CPU or memory.

A DDoS attack operates on the same principles, except the malicious traffic is generated from multiple sources, although orchestrated from one central point. The fact that the traffic sources are distributed – often throughout the world – makes DDoS attack prevention much harder than preventing DoS attacks originating from a single IP address.

Another reason that preventing DDoS attacks is a challenge is that many of today’s attacks are “amplification” attacks. These involve sending out small data packets to compromised or badly configured servers around the world, which then respond by sending much larger packets to the server under attack. A well-known example of this is a DNS amplification attack, where a 60 byte DNS request may result in a 4,000 byte response being sent to the victim – an amplification factor of around 70 times the original packet size.

More recently, attackers have exploited a server feature called memcached to launch memcached amplification attacks, where a 15 byte request can result in a 750 kB response – an amplification factor of more than 50,000 times the original packet size. The world’s largest ever DDoS attack, launched against GitHub earlier this year, was a memcached amplification attack that peaked at 1.35 Tbps of data hitting GitHub’s servers.
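The arithmetic behind these amplification factors is simple – response size divided by request size – and can be checked with the figures quoted above:

```python
def amplification_factor(request_bytes, response_bytes):
    """Bandwidth amplification factor: how many times larger the
    reflected response is than the request that triggered it."""
    return response_bytes / request_bytes

# DNS: a 60-byte query can trigger a ~4,000-byte response
print(round(amplification_factor(60, 4_000)))      # ~67x, "around 70"
# memcached: a 15-byte request can trigger a ~750 kB response
print(round(amplification_factor(15, 750_000)))    # 50,000x
```

This is why an attacker with only modest upstream bandwidth can still direct terabit-scale floods at a victim.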

The benefit to malicious actors of amplification attacks is that they need only a limited amount of bandwidth at their disposal to launch far larger attacks on their victims than they could do by attacking the victims directly.

Six steps to prevent DDoS attacks

1. Buy more bandwidth

Of all the ways to prevent DDoS attacks, the most basic step you can take to make your infrastructure “DDoS resistant” is to ensure that you have enough bandwidth to handle spikes in traffic that may be caused by malicious activity.

In the past it was possible to avoid DDoS attacks by ensuring that you had more bandwidth at your disposal than any attacker was likely to have. But with the rise of amplification attacks, this is no longer practical. Instead, buying more bandwidth now raises the bar which attackers have to overcome before they can launch a successful DDoS attack, but by itself, purchasing more bandwidth is not a DDoS attack solution.

2. Build redundancy into your infrastructure

To make it as hard as possible for an attacker to successfully launch a DDoS attack against your servers, make sure you spread them across multiple data centers with a good load balancing system to distribute traffic between them. If possible, these data centers should be in different countries, or at least in different regions of the same country.

For this strategy to be truly effective, it’s necessary to ensure that the data centers are connected to different networks and that there are no obvious network bottlenecks or single points of failure on these networks.

Distributing your servers geographically and topologically will make it hard for an attacker to successfully attack more than a portion of them, leaving the rest unaffected and able to take on at least some of the extra traffic that the affected servers would normally handle.

3. Configure your network hardware against DDoS attacks

There are a number of simple hardware configuration changes you can make to help prevent a DDoS attack.

For example, configuring your firewall or router to drop incoming ICMP packets or block DNS responses from outside your network (by blocking UDP port 53) can help prevent certain DNS and ping-based volumetric attacks.
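The decision logic those two rules encode can be sketched as a toy packet classifier. This is an illustration of the policy, not a firewall implementation; real rule sets need far more nuance, such as still allowing DNS replies to your own outbound resolver queries:

```python
def should_drop(protocol, src_port=None, inbound=True):
    """Toy edge-filter decision mirroring the two rules above:
    drop incoming ICMP (blunts ping-based volumetric attacks) and
    drop inbound UDP traffic sourced from port 53 (blunts DNS
    amplification replies aimed at you). Illustrative only."""
    if not inbound:
        return False  # these rules only apply to incoming traffic
    if protocol == "icmp":
        return True
    if protocol == "udp" and src_port == 53:
        return True
    return False

print(should_drop("icmp", inbound=True))        # True: ping dropped
print(should_drop("udp", src_port=53))          # True: DNS reply dropped
print(should_drop("tcp", src_port=443))         # False: normal web traffic
```

In practice the same policy would be expressed as firewall or router ACL rules rather than application code.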

4. Deploy anti-DDoS hardware and software modules

Your servers should be protected by network firewalls and more specialized web application firewalls, and you should probably use load balancers as well. Many hardware vendors now include software protection against DDoS protocol attacks such as SYN flood attacks, for example, by monitoring how many incomplete connections exist and flushing them when the number reaches a configurable threshold value.
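The SYN-flood defence described above – counting half-open connections and flushing them past a threshold – can be sketched as follows. The class name, threshold and oldest-first eviction policy are illustrative; real devices track timers and per-source state as well:

```python
class HalfOpenTracker:
    """Sketch of SYN-flood mitigation: track embryonic (half-open)
    TCP connections and flush the oldest once a configurable
    threshold is exceeded. Illustrative only."""

    def __init__(self, threshold=1000):
        self.threshold = threshold
        self.half_open = []  # connection ids, oldest first

    def on_syn(self, conn_id):
        """Register a new half-open connection; return any flushed ids."""
        self.half_open.append(conn_id)
        flushed = []
        while len(self.half_open) > self.threshold:
            flushed.append(self.half_open.pop(0))  # evict the oldest
        return flushed

    def on_ack(self, conn_id):
        """Handshake completed: the connection is no longer half-open."""
        if conn_id in self.half_open:
            self.half_open.remove(conn_id)
```

Under a flood of SYNs with no completing ACKs, the tracker keeps the half-open table bounded instead of letting it exhaust server memory.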

Specific software modules can also be added to some web server software to provide some DDoS prevention functionality. For example, Apache 2.2.15 ships with a module called mod_reqtimeout to protect itself against application-layer attacks such as the Slowloris attack, which opens connections to a web server and then holds them open for as long as possible by sending partial requests until the server can accept no more new connections.
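The core idea behind mod_reqtimeout – refusing requests that trickle in below a minimum byte rate – can be sketched in a few lines. The function name and the 500 B/s and 5-second defaults are illustrative, loosely modelled on the directive’s minimum-rate option:

```python
def is_slowloris_suspect(bytes_received, seconds_open,
                         min_rate=500.0, grace=5.0):
    """Flag a connection whose request data is arriving below a
    minimum byte rate -- the same idea mod_reqtimeout applies to
    header reads. Rate and grace period are illustrative defaults."""
    if seconds_open <= grace:
        return False  # too early to judge a brand-new connection
    return bytes_received / seconds_open < min_rate

print(is_slowloris_suspect(100, 10.0))     # True: 10 B/s trickle
print(is_slowloris_suspect(10_000, 10.0))  # False: healthy 1,000 B/s
```

Connections flagged this way can be closed early, freeing slots that a Slowloris attacker is trying to hold open indefinitely.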

5. Deploy a DDoS protection appliance

Many security vendors including NetScout Arbor, Fortinet, Check Point, Cisco and Radware offer appliances that sit in front of network firewalls and are designed to block DDoS attacks before they can take effect.

They do this using a number of techniques, including carrying out traffic behavioral baselining and then blocking abnormal traffic, and blocking traffic based on known attack signatures.

The main weakness of this type of approach of preventing DDoS attacks is that the appliances themselves are limited in the amount of traffic throughput they can handle. While high-end appliances may be able to inspect traffic coming in at a rate of up to 80 Gbps or so, today’s DDoS attacks can easily be an order of magnitude greater than this.

6. Protect your DNS servers

Don’t forget that a malicious actor may be able to bring your web servers offline by DDoSing your DNS servers. For that reason it is important that your DNS servers have redundancy, and placing them in different data centers behind load balancers is also a good idea. A better solution may even be to move to a cloud-based DNS provider that can offer high bandwidth and multiple points-of-presence in data centers around the world. These services are specifically designed with DDoS prevention in mind. For more information, see How to Prevent DNS Attacks.

Source: https://www.esecurityplanet.com/network-security/how-to-prevent-ddos-attacks.html

Hospitality industry under siege from botnets

The hospitality industry, including hotels, airlines and cruise lines, is the biggest target for cyber criminal botnet attacks that abuse credentials and overwhelm online systems, a report reveals

Cyber security defenders face increasing threats from bot-based credential abuse targeting the hospitality industry, a report shows.

Bot-based attacks are also being used for advanced distributed denial of service (DDoS) attacks, according to the Summer 2018 state of the internet/security: web attack report by Akamai Technologies.

The report is based on attack data from across Akamai’s global infrastructure and represents the research of a diverse set of teams throughout the company.

Analysis of current cyber attack trends for the six months from November 2017 to April 2018 reveals the importance of maintaining agility not only by security teams, but also by developers, network operators and service providers in order to mitigate new threats, the report said.

The use of bots to abuse stolen credentials continues to be a major risk for internet-driven businesses, but Akamai’s data revealed that the hospitality industry experiences many more credential abuse attacks than other sectors.

Akamai researchers analysed nearly 112 billion bot requests and 3.9 billion malicious login attempts that targeted sites in this industry. Nearly 40% of the traffic seen across hotel and travel sites is classified as “impersonators of known browsers”, which is a common technique used by cyber fraudsters.

Geographic analysis of attack traffic origination revealed that Russia, China and Indonesia were major sources of credential abuse for the travel industry during the period covered by the report, directing about half of their credential abuse activity at hotels, cruise lines, airlines, and travel sites. Attack traffic origination against the hospitality and travel industry from China and Russia combined was three times the number of attacks originating in the US.

“These countries have historically been large centres for cyber attacks, but the attractiveness of the hospitality industry appears to have made it a significant target for hackers to carry out bot-driven fraud,” said Martin McKeay, senior security advocate at Akamai and senior editor of the report.

While simple volumetric DDoS attacks continued to be the most common method used to attack organisations globally, the report said other techniques have continued to appear. Akamai researchers identified and tracked advanced techniques that show the influence of intelligent, adaptive enemies who change tactics to overcome the defences in their way.

One of the attacks mentioned in the report came from a group that coordinated its attacks over group chats on the Steam digital distribution platform and IRC (internet relay chat). Rather than using a botnet of devices infected with malware to follow hacker commands, these attacks were carried out by a group of human volunteers.

Another notable attack overwhelmed the target’s DNS (domain name system) server with bursts lasting several minutes rather than sustaining a single continuous assault. This made mitigation harder because of the sensitivity of DNS servers: they are what allow outside computers to find the target on the internet, so they cannot simply be cut off. The burst pattern also increased the difficulty for defenders by wearing them down over a long period of time.

“Both of these attack types illustrate how attackers are always adapting to new defences to carry out their nefarious activities,” said McKeay. “These attacks, coupled with the record-breaking 1.35Tbps memcached attacks from earlier this year, should serve as a not-so-gentle reminder that the security community can never grow complacent.”

Other key findings of the report include a 16% increase in the number of DDoS attacks recorded since 2017. Researchers identified a 4% increase in reflection-based DDoS attacks since 2017 and a 38% rise in application-layer attacks such as SQL injection or cross-site scripting.

The report also noted that in April 2018, the Dutch National High Tech Crime Unit took down a malicious DDoS-for-hire website with 136,000 users.

Source: https://www.computerweekly.com/news/252443696/Hospitality-industry-under-siege-from-botnets

Cyber security incidents could cost Aussie businesses $29B per year

Fear and doubt about cyber risks have led 66 per cent of Australian businesses to put off digital transformation plans, with security incidents potentially costing organisations $29 billion per year.

In research conducted by Frost & Sullivan and commissioned by Microsoft, the costs of local security incidents include lost revenue, decreased profitability, fines, lawsuits and remediation.

“The fact that two-thirds of Australian organisations are putting off digital transformation efforts is concerning, when you consider that digital transformation is expected to contribute $45 billion to Australia’s economy by 2021,” Microsoft director of corporate legal and external affairs Tom Daemen said.

“To combat this, we need to be instilling a data culture throughout organisations. Data management needs to be prioritised in the boardroom as a strategic focus.

“Not only will this ensure organisations comply with Australian Notifiable Data Breaches Act and European GDPR legislation, but it will empower employees to see data as the strategic asset it is – and push forward with digital transformation initiatives.”

The study, Understanding the Cybersecurity Threat Landscape in Asia Pacific: Securing the Modern Enterprise in a Digital World, revealed that a large-sized organisation (over 500 employees) in Australia can incur an economic loss of $35.9 million if a breach occurs.

The economic loss is calculated from direct costs, indirect costs (including customer churn and reputation damage) as well as induced costs (the impact of cyber breach to the broader ecosystem and economy, such as the decrease in consumer and enterprise spending).

A total of 1,300 executives were interviewed for this study in Australia, China, Hong Kong, Indonesia, India, Japan, Korea, Malaysia, New Zealand, Philippines, Singapore, Taiwan and Thailand.

According to findings, more than half of the organisations surveyed in Australia, or 55 per cent, have experienced a cyber security incident in the last five months while one in five companies are not sure if they have had one or not as they have not performed proper forensics or a data breach assessment.

“The number of organisations that have experienced a cyber security incident, although large, is not particularly surprising given the increased rate of cyber security attacks we’re seeing annually,” Daemen said.

“However, the finding that one in five Australian businesses are not performing regular forensics and data breach assessments is surprising given the frequency of attacks and suggests a need for greater awareness and a cultural shift in how we manage and think about data.”

Artificial intelligence (AI) is being adopted by businesses in order to improve their cyber security.

In fact, the study found that 84 per cent of Australian organisations have either adopted or are looking to adopt an AI approach towards boosting cyber security.

Although ransomware and DDoS attacks have dominated headlines in recent times, the study found that online brand impersonation, remote code execution and data corruption are actually the bigger concern as they have the highest impact on business with the slowest recovery time.

Email scams cost Australian businesses $22.1 million in 2017, according to the combined scams reported to both the ACCC and ACORN.

ACCC’s Scamwatch alone received 5,432 scam reports from Australian businesses in 2017, with 60 per cent delivered via email and money sent to scammers via bank transfer 85 per cent of the time – total losses from those scams amounted to $4.6 million.

Source: https://www.arnnet.com.au/article/642959/cyber-security-could-cost-aussie-businesses-29b-per-year/

The Lesson of the GitHub DDoS Attack: Why Your Web Host Matters

Surviving a cyberattack isn’t like weathering a Cat 5 hurricane or coming through a 7.0 earthquake unscathed. Granting that natural disasters too often have horrendous consequences, there’s also a “right place, right time” element to making it through. Cyber-disasters – which can be every bit as calamitous in their own way as acts of nature – don’t typically bend to the element of chance. If you come out the other side intact, it’s probably no accident. It is, instead, the result of specific choices, tools, policies and practices that can be codified and emulated – and that need to be reinforced.

Consider the recent case of GitHub, the target of the largest DDoS attack ever recorded. GitHub’s experience is instructive, and perhaps the biggest takeaway can be expressed in four simple words: Your web host matters.

That’s especially crucial where security is concerned. Cloud security isn’t like filling out a job application; it’s not a matter of checking boxes and moving on. Piecemeal approaches to security simply don’t work. Patching a hole or fixing a bug, and then putting it “behind” you – that’s hardly the stuff of which effective security policies are made. Because security is a moving target, scattershot repairs ignore the hundreds or even thousands of points of vulnerability that a policy of continuing monitoring can help mitigate.

Any cloud provider worth its salt brings to the task a phalanx of time-tested tools, procedures and technologies that ensure continuous uptime, regular backups, data redundancy, data encryption, anti-virus/anti-malware deployment, multiple firewalls, intrusion prevention and round-the-clock monitoring. So while data is considerably safer in the cloud than beached on equipment under someone’s desk, there is no substitute for active vigilance – accent on active, since vigilance is both a mindset and a verb. About that mindset: sound security planning requires assessing threats, choosing tools to meet those threats, implementing those tools, assessing the effectiveness of the tools implemented – and repeating this process on an ongoing basis.

Among the elements of a basic cybersecurity routine: setting password expirations, obtaining certificates, avoiding the use of public networks, meeting with staff about security, and so on. Perfection in countering cyberattacks is as elusive here as it is in any other endeavor. Even so, that can’t be an argument for complacence or anything less than maximum due diligence, backed up by the most capable technology at each organization’s disposal.

In this sequence of events is a counterintuitive lesson about who and what is most vulnerable during a hack. The experience of public cloud providers should put to rest the notion that the cloud isn’t safe. GitHub’s experience makes a compelling argument that the cloud is in fact the safest place to be in a cyber hurricane. Internal IT departments, fixated on their own in-house mixology, can be affected big-time – as they were in a number of recent ransomware attacks – raising the very legitimate question of why some roll-your-own organizations devote precious resources, including Bitcoin ransom payments, to those departments in the belief that the cloud is a snakepit.

Cloud security isn’t what it used to be – and that’s a profound compliment to the cloud industry’s maturity and sophistication. What once was porous is now substantially better in every way, which isn’t to deny that bad actors have raised their game as well. Some aspects of cloud migration have always been threatening to the old guard. Here and there, vendors and other members of the IT community have fostered misconceptions about security in the cloud – not in an effort to thwart migration but in a bid to control it. Fear fuels both confusion and dependence.

Sadly, while established cloud security protocols should be standard-issue stuff, they aren’t. The conventional wisdom is that one cloud hosting company is the same as another, and that because they’re committed to life off-premises, they all must do the exact same thing, their feature sets are interchangeable, and the underlying architecture is immaterial. The message is, it doesn’t matter what equipment they’re using — it doesn’t matter what choice you make. But in fact, it does. Never mind the analysts; cloud computing is not a commodity business. And never mind the Street; investors and Certain Others fervently want it to be a commodity, but because those Certain Others go by the name of Microsoft and Amazon, fuzzing the story won’t fly. They want to grab business on price and make scads of money on volume (which they are).

The push to reduce and simplify is being driven by a combination of marketing gurus who are unfamiliar with the technology and industry pundits who believe everything can be plotted on a two-dimensional graph. Service providers are trying to deliver products that don’t necessarily fit the mold, so it’s ultimately pointless to squeeze technologies into two or three dimensions. These emerging solutions are much more nuanced than that.

Vendors need to level with users. The devil really is in the details. There are literally hundreds of decisions to make when architecting a solution, and those choices mean that every solution is not a commodity. Digital transformation isn’t going to emerge from some marketing contrivance, but from technologies that make cloud computing more secure, more accessible and more cost-effective.

Source: https://hostingjournalist.com/expert-blogs/the-lesson-of-the-github-ddos-attack-why-your-web-host-matters/

How CIA can improve your cyber security

The threat of cyber-attack is increasing every year.

According to the Online Trust Alliance, 2017 was the worst yet in terms of attacks on business. Figures indicate that attacks doubled from 82,000 incidents in 2016 to over 159,000 – and that’s just the ones we know about.

Keeping up to date with the latest cyber security threats is an almost impossible task. The time between vulnerability disclosure and attack launch is getting shorter all the time, and it’s easy for a hacker to change a line of code in the program, and then fire off another (ever so slightly different) attack.

Just to prove the point, in 2016, ransomware peaked at 40,000 attacks a day, with over 400,000 variations found. Imagine trying to keep on top of all that.

Effective cyber security is knowing what’s important to you and protecting it to the best of your abilities. Think of it in three elements – the CIA triad:

  • Confidentiality
  • Integrity
  • Availability

Confidentiality – who really needs access to the information?

Confidentiality is all about privacy and works on the basis of ‘least privilege’. Only those who require access to specific information should be granted it, and measures need to be put in place to ensure sensitive data is prevented from falling into the wrong hands.

The more critical the information, the stronger the security measures need to be.

Measures that support confidentiality can include data encryption, IDs and passwords, two-factor authentication, biometric verification, air-gapped systems (physically isolating a secure computer network from unsecured networks such as the public internet) or even disconnected devices for the most sensitive of information.
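The ‘least privilege’ principle underlying these measures can be sketched in a few lines of Python. The roles and permission names below are hypothetical examples, not a prescription; the point is the default-deny shape of the check.

```python
# Illustrative least-privilege sketch: each role holds only the
# permissions it strictly requires; everything else is denied by default.
# Role and permission names here are hypothetical examples.

ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "hr":      {"read:reports", "read:personnel"},
    "admin":   {"read:reports", "read:personnel", "write:personnel"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles or unlisted permissions get no access."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# An analyst can read reports but cannot see personnel records.
print(is_allowed("analyst", "read:reports"))
print(is_allowed("analyst", "read:personnel"))
```

The key design choice is that absence of a grant means denial; access is never the fallback, so a misconfigured or unknown role fails closed rather than open.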

Integrity – how do you ensure the accuracy of your data?

The integrity of your information is essential, and organisations need to take the necessary steps to ensure that it remains accurate throughout its entire life cycle, whether at rest or during transit.

Access privileges and version control are always useful to prevent unwanted changes or deletion of your information. Back-ups should be taken at regular intervals to ensure that any data can be restored.

When it comes to integrity of information in transit, one-way hashes – an algorithm that turns messages or text into a fixed string of digits, making it nearly impossible to derive the original text from the string – can be utilised to ensure that the data has remained unchanged.
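A one-way hash check of this kind can be sketched with Python’s standard hashlib:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """One-way hash: cheap to compute, infeasible to invert."""
    return hashlib.sha256(data).hexdigest()

message = b"Quarterly results: +4.2%"
digest_sent = sha256_hex(message)  # transmitted alongside the message

# The recipient recomputes the digest; a match means the bytes are unchanged.
assert sha256_hex(message) == digest_sent

# Even a one-character change yields a completely different digest.
tampered = b"Quarterly results: +9.2%"
assert sha256_hex(tampered) != digest_sent
```

One caveat worth noting: a plain hash only detects accidental corruption. An attacker who can alter the data in transit can also recompute the hash, so guarding against deliberate tampering calls for a keyed construction such as an HMAC, where recomputation requires a secret key.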

Availability – how do you keep your business up and running?

Keeping your business operational is critical and you need to ensure that those who need access to hardware, software, equipment or even information can maintain this access at any time.

Disaster planning is essential for this and organisations need to plan ahead to prevent any loss of availability, should the worst happen.

Examples of disaster planning include preparing to deal with cyber-attacks (such as DDoS), data centre power loss or even potential natural disasters.
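In code, availability planning often boils down to redundancy plus failover: try the primary, fall back to a replica, and only fail when everything is down. The sketch below is a toy illustration; the endpoint names and the simulated outage are hypothetical.

```python
# Minimal failover sketch. The endpoints and the fetch function are
# hypothetical placeholders; fetch_from simulates a primary outage.

def fetch_from(endpoint: str) -> str:
    if endpoint == "primary.example.com":
        raise ConnectionError("primary data centre unreachable")  # simulated outage
    return f"response from {endpoint}"

def fetch_with_failover(endpoints: list) -> str:
    """Try each replica in order; raise only if every one is down."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch_from(endpoint)
        except ConnectionError as err:
            last_error = err  # remember the failure, move to the next replica
    raise RuntimeError("all replicas unavailable") from last_error

print(fetch_with_failover(["primary.example.com", "backup.example.com"]))
```

The same ordered-fallback shape applies whether the replicas are data centres, DNS providers or network links; the disaster plan decides how many replicas exist and in what order they are tried.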

Getting the combination right

All three of the CIA elements listed above are required to ensure you remain protected. If one aspect fails, it could provide a way in for hackers to compromise your network and your data.

However, the right mix of the three elements depends on the individual company and on the project or asset concerned. Some companies may value confidentiality above all; others may place most value on availability.

Whatever the combination, it’s important that the CIA triad is considered at all times and by doing so you protect your organisation against a range of threats, without having to spend too much time keeping up with the latest threats.

Source: http://www.businesscloud.co.uk/opinion/how-cia-can-improve-your-cyber-security

Cyber attack warnings highlight need to be prepared

Fresh warnings about the vulnerability of national infrastructure to cyber attacks show the need for securing and monitoring associated control systems connected to the internet.

The commander of Britain’s Joint Forces Command has warned that UK traffic control systems and other critical infrastructure could be targeted by cyber adversaries – but industry experts say this is nothing new and something organisations should be preparing for.

According to Christopher Deverell, these systems could be targeted by countries such as Russia. “There are many potential angles of attack on our systems,” he told the BBC’s Today programme.

Other vulnerable control systems that are connected to the internet are used in power stations, for air traffic control and for rail and other transport systems.

Sean Newman, director at Corero Network Security, said there is nothing new in the claims. “The potential for such attacks has been growing for several years as more systems become connected,” he said.

“There are many good reasons for connecting operational and information networks, including efficiency and effectiveness. However, this opens up operational controls to potential attacks from across the internet, where previously they were completely isolated and only accessible from the inside.”

According to Newman, the question is no longer whether such attacks are theoretically possible, but who is bold enough to carry out such assaults and risk the likely repercussions.

“It is reasonable to assume that it’s more a matter of time than if, so the operators of such systems need to be fully cognisant of the potential risks and deploy all reasonable protection to minimise it,” he said.

“This includes preventing remote access to such systems, as well as real-time defences against DDoS [distributed denial of service] attacks which could disrupt their operation or prevent legitimate access for operation and control purposes.”

Andrea Carcano, chief product officer at Nozomi Networks, said the reality is that the UK’s infrastructure, and those in every developed country around the world, is being continually poked and probed, not just by nation states but by criminals, hacktivists and even curious hobbyists.

“We have seen the damage that can be done from hacks in the Ukraine, where attackers were able to compromise systems and turn the lights out,” he said. “With each incursion, both successful and those that are thwarted, the attackers will learn what has worked, what hasn’t, and what can be improved for the next attempt.

“The challenge for those charged with protecting our critical infrastructure is visibility, as you can’t protect what you don’t know exists.”

According to Carcano, 80% of the industrial facilities Nozomi visits do not have up-to-date lists of assets or network diagrams.

“Ironically, this doesn’t pose a problem to criminals who are using readily available open source tools to query their targets and build a picture of what makes up their network environment and is potentially vulnerable – be it a power plant, factory assembly line, or our transport infrastructure,” he said.

Nozomi researchers created a security testing and fuzzing tool, using open source software, that is capable of automatically finding vulnerabilities in proprietary protocols used by industrial control system (ICS) devices.

“Using just this tool, and in a limited time period, they identified eight zero-day vulnerabilities that, if exploited, could be used to shut down the controllers, making the devices unmanageable, and even potentially corrupt normal processes, which could be extremely serious or even fatal,” said Carcano.

“As the cyber security risk to critical infrastructure and manufacturing organisations increases, it is important for enterprises to actively monitor and secure operational technology [OT] networks. An important aspect of this is having complete visibility to OT networks and assets and their cyber security and process risks.”
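The mutation-fuzzing technique Carcano describes can be shown in miniature: take a known-good message, flip bytes at random, and count the inputs that crash the parser. Everything below is a hypothetical toy, not the Nozomi tool; the “fragile parser” stands in for a device’s proprietary protocol handler.

```python
# Toy mutation fuzzer for an opaque binary protocol. The parser and the
# packet format are hypothetical stand-ins, illustrative only.
import random

def fragile_parser(packet: bytes) -> None:
    """Stand-in for device firmware: trips over a bad length field."""
    if len(packet) < 2:
        raise ValueError("short packet")
    declared_len = packet[1]                  # byte 1 declares payload length
    payload = packet[2:2 + declared_len]
    if len(payload) != declared_len:
        raise IndexError("length field exceeds packet size")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip one random byte of a known-good message."""
    data = bytearray(seed)
    data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 1000, rng_seed: int = 1) -> int:
    """Return how many mutated inputs crashed the parser."""
    rng = random.Random(rng_seed)             # fixed seed: reproducible runs
    crashes = 0
    for _ in range(iterations):
        try:
            fragile_parser(mutate(seed, rng))
        except (ValueError, IndexError):
            crashes += 1                      # each crash is a lead to triage
    return crashes

good_packet = bytes([0x01, 0x04, 0xAA, 0xBB, 0xCC, 0xDD])
print(fuzz(good_packet), "crashing inputs found")
```

Real ICS fuzzers add protocol awareness, crash triage and live-device monitoring, but the core loop, mutate then observe, is exactly this simple, which is why such tools are cheap to build from open source components.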

However, Deverell suggested that as well as making sure cyber security is continually improving, the UK should also have an offensive capability to respond to attacks on critical infrastructure if necessary, reports The Telegraph.

His comments echo those by UK attorney general Jeremy Wright, who recently suggested that the UK has a legal right to retaliate against aggressive cyber attacks in the same way as it would to armed attacks.

“Cyber operations that result in, or present, an imminent threat of death and destruction on an equivalent scale to an armed attack will give rise to an inherent right to take action in self defence,” he said.

According to Wright, if a hostile state interfered with the operation of one of the UK’s nuclear reactors, resulting in the widespread loss of life, the fact that the act was carried out via a cyber operation does not prevent it from being viewed as an unlawful use of force or an armed attack.

“States that are targeted by hostile cyber operations have the right to respond to those operations in accordance with the options lawfully available to them,” he said.

The UK has previously indicated that it is building cyber-offensive capabilities, but in January 2018, Ciaran Martin, head of the National Cyber Security Centre (NCSC), said that while this will be an “increasing part of the UK’s security toolkit”, a cyber attack would not necessarily trigger a retaliatory cyber attack, but a range of responses would be considered, including sanctions.

UK Chief of the General Staff Nick Carter has called for increased defence spending to help the country keep up with its adversaries, particularly given that cyber attacks targeting military and civilian operations are among the biggest threats facing the country. Commenting on those calls, Martin confirmed that some of these attacks were aimed at identifying vulnerabilities in infrastructure for potential future disruption, but added that there had been no successful attacks on UK infrastructure.

A report by the Kosciuszko Institute, published in January, predicts that 2018 could be a year of cyber attacks on critical infrastructure.

In the report, Paul Timmers, an academic at Oxford University and former director of the European Commission’s Sustainable & Secure Society Directorate, noted that attacks on systems that are crucial for the functioning of the state and society, including logistics, health and energy, date from 2016.

Timmers believes that the risk of attacks in 2018 may spread to other sectors of the economy, such as transport. An important element of the potential incidents, he said, will be their predicted international and cross-sector nature, which creates an urgent need for cooperation between international organisations, governments and companies.

Sean Kanuck, director of future conflict and cyber security at the International Institute for Strategic Studies and formerly the first US national intelligence officer for cyber issues, predicted a period of intense use of sanctions as a diplomatic tool against entities that undertake offensive actions in the cyber space.

The growing likelihood of ever-escalating conflicts in the cyber space makes it necessary to address standards of operation in the digital space, the report said.

Source: https://www.computerweekly.com/news/252443085/Cyber-attack-warnings-highlight-need-to-be-prepared