Attacks Involving Data Tampering are Difficult to Identify

Imagine a data breach at a pharmaceutical manufacturer in which no information is stolen and no ransomware is deployed. Instead, the attacker quietly alters some clinical trial data, ultimately causing the company to release a faulty medicine.

For now, this is only a hypothetical scenario. Ransomware and the theft of sensitive data remain two of the biggest and most pressing security concerns, but at least for those, solutions exist.

Data tampering is a different kind of hazard, and depending on the circumstances, it can be even more serious for some firms. Yet experts told Protocol that because so few of these attacks have occurred and come to light, it is not something many firms are worried about.

Data manipulation is expected to become a growing threat in the years to come, according to Will Ackerly.

According to Lou Steinberg, CISOs in a variety of sectors, including financial services and pharmaceuticals, are growing more concerned about the possibility of attacks on "data integrity" or "data manipulation."

Steinberg, founder of the cybersecurity research lab CTM Insights, gave the example of a threat actor corrupting a portion of a publicly traded company's data and then making the intrusion public, leaving the company unable to close its books at the end of the quarter.
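To make the integrity problem concrete, the sketch below shows one generic way to detect that kind of partial corruption: keep an independently stored manifest of per-record hashes and compare it against a fresh one to flag which records changed. This is a minimal illustration of the idea only, not a description of CTM Insights' technology or any vendor's product; the record IDs and ledger shown are hypothetical.

```python
# Minimal sketch of tamper detection via a per-record hash manifest.
# Generic illustration of the "data integrity" idea; not any vendor's approach.
import hashlib
import json


def build_manifest(records: dict[str, dict]) -> dict[str, str]:
    """Map each record ID to a SHA-256 digest of its canonical JSON form."""
    return {
        rec_id: hashlib.sha256(
            json.dumps(rec, sort_keys=True).encode("utf-8")
        ).hexdigest()
        for rec_id, rec in records.items()
    }


def find_tampered(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return IDs whose digests differ, or that were added or removed."""
    all_ids = baseline.keys() | current.keys()
    return sorted(i for i in all_ids if baseline.get(i) != current.get(i))


# Example: a small (hypothetical) ledger, a stored baseline, then one altered record.
ledger = {"txn-1": {"amount": 100.0}, "txn-2": {"amount": 250.0}}
baseline = build_manifest(ledger)   # would be stored out-of-band, away from the data

ledger["txn-2"]["amount"] = 2500.0  # attacker quietly corrupts a single record
print(find_tampered(baseline, build_manifest(ledger)))  # ['txn-2']
```

The point of keeping the manifest out-of-band is that an attacker who can rewrite the data cannot also rewrite the fingerprints, so even a small, targeted change becomes detectable.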

Warnings about such attacks have circulated for years. The fact that so few have garnered media attention suggests they may be harder to pull off than they appear.

Even so, experts said the technology and understanding needed to address data manipulation are still not where they should be.

The follow-up question, Steinberg said, is whether it would even be feasible to recover the original, uncorrupted version of the data, given the pace at which modern data is collected and overwritten. For frequently changing data, he said, "a rollback can generate more harm than the attack."

According to Heidi Shey, a principal analyst at Forrester, most firms are also preoccupied with other data security concerns, such as safeguarding the confidentiality of their data.

Adversarial machine learning is one form of data manipulation threat that has attracted somewhat more attention. In a data-poisoning attack, the attacker tries to corrupt an ML model by feeding it tainted data during training.
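A minimal sketch of how training-time poisoning degrades a model is shown below. It uses a synthetic dataset and a simple scikit-learn classifier as stand-ins; the poisoning rate and model choice are illustrative assumptions, not details from any incident described here.

```python
# Minimal sketch of a label-flipping data-poisoning attack (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a victim's training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)


def train_and_score(X_tr, y_tr):
    """Train a simple classifier and report its accuracy on untouched test data."""
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_test, model.predict(X_test))


# Baseline: model trained on clean data.
print("clean accuracy:   ", round(train_and_score(X_train, y_train), 3))

# Attack: flip the labels of 30% of the training examples before training.
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]
print("poisoned accuracy:", round(train_and_score(X_train, y_poisoned), 3))
```

The test set is left untouched, so the drop in the second accuracy figure reflects only the corrupted training data, which is the essence of the attack: the model's owner sees nothing unusual until its predictions start failing.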

The motives can vary, but the result is an ML model that no longer behaves as intended. One notorious example of adversarial machine learning is Microsoft's short-lived Twitter chatbot Tay, and there are many documented cases of successful data-poisoning attacks on ML models, carried out both by threat actors and by researchers.

Those attacks typically did not involve an actual data breach; the attackers were able to influence the ML models from the outside. Still, Lisa O'Connor cautioned that the data repositories used to train important ML models could become a prime target for a determined attacker.

Adversarial ML risks are a significant worry, O'Connor said, given society's increasing reliance on algorithms. She pointed to efforts such as MITRE ATLAS that aim to defend ML models against attack and said, "The stakes are really high for maintaining that ecosystem."
