A Data Protection Strategy to Enable Big Data Initiatives
Business insights from Big Data analytics promise major benefits to enterprises. Architectures like Hadoop can aggregate structured, semi-structured, and unstructured data, run parallel computations on large datasets, and continuously feed that store so data scientists can spot patterns and trends. To extract maximum value from their information, enterprises need to enable secure access to data for analytics.
But these initiatives also introduce big potential risks. Managing massive amounts of data increases both the likelihood and the magnitude of a potential data breach. Sensitive data can be exposed, violating compliance and data security regulations, and aggregating data across borders can break data residency laws. So any Big Data initiative needs a solution that secures sensitive data yet still enables analytics for meaningful insights.
Voltage Security delivers the data protection strategy organizations need to deploy Big Data for competitive advantage, and only Voltage delivers these key capabilities:
- Secure sensitive data entering Hadoop, then control access
Protect data from any source, in any format, before it enters Hadoop. Set policies controlling which applications and which users can access the original data, with protection that keeps sensitive values usable and realistic, so analytics and modeling stay accurate on data in its encrypted form.
- Assure global regulatory compliance
Securely capture, analyze and store data from global sources, and ensure compliance with international data security, residency and privacy regulations. Address compliance comprehensively, not system-by-system.
- Optimize performance and scalability
Integrate data security quickly, with an efficient, low-maintenance solution that won't degrade performance and will scale with the cluster. Leverage IT investments by integrating with the existing environment and extending current controls and processes into Hadoop.
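The idea of protection that preserves usable, realistic values can be illustrated with a minimal sketch. This is not Voltage's actual format-preserving encryption product, just a simplified, deterministic masking routine in Python; the `SECRET_KEY` and `mask_digits` names are hypothetical. It keeps each value's length and punctuation, so joins and pattern analysis still work on the protected form.

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # hypothetical key; in practice managed by a key server


def mask_digits(value: str, key: bytes = SECRET_KEY) -> str:
    """Deterministically replace each digit with a key-derived digit.

    Length and format are preserved: non-digit characters (dashes,
    spaces) pass through unchanged, so the masked value still "looks
    like" the original field.
    """
    digest = hmac.new(key, value.encode(), hashlib.sha256).digest()
    out = []
    digit_pos = 0
    for ch in value:
        if ch.isdigit():
            # Derive a replacement digit from the keyed digest.
            out.append(str(digest[digit_pos % len(digest)] % 10))
            digit_pos += 1
        else:
            out.append(ch)
    return "".join(out)


# Example: an SSN-shaped value keeps its 3-2-4 dashed shape after masking.
masked = mask_digits("123-45-6789")
```

Because the masking is deterministic, the same input always yields the same token, so counts, joins, and frequency analysis remain valid on protected data. Unlike true format-preserving encryption (e.g., NIST FF1), this one-way sketch cannot be decrypted; it only conveys the shape-preserving idea.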
Use cases include:
- A U.S. government agency shares a 40+ TB healthcare informatics dataset with independent R&D institutes across the country, which analyze the data for emerging risks and patterns. All data is de-identified at the field level before release, in full HIPAA/HITECH compliance. Health risks can be spotted down to the individual, and through secure re-identification by the agency, affected people can be contacted and proactively treated for improved outcomes.
- An industry regulatory body is charged with ongoing monitoring of financial services activities and brokerage patterns to detect compliance violations and identify data breach and financial market risks. Data is de-identified in a single ETL pass before entering Hadoop, so data scientists run secure analytics on data that is already protected in production. Because constant encrypt/decrypt operations are unnecessary, there is virtually no performance impact.
- A top insurance and financial services provider must extract more real-time insight from customer records to sharpen competitive offerings and grow revenue through more targeted promotions. De-identified customer data enables deeper analytics of buyer behavior and segmentation in Hadoop without violating industry, state, or national data privacy regulations.
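The ETL-style de-identification with later re-identification described in the use cases above can be sketched in a few lines. This is a hypothetical illustration, not the product's API: `deidentify_rows` and the key are invented names. Sensitive fields are tokenized one-way during the ETL pass, and a reverse map stays with the data owner (as in the agency scenario) so individuals can be re-identified under controlled access.

```python
import hashlib
import hmac

OWNER_KEY = b"agency-held-key"  # hypothetical; retained only by the data owner


def deidentify_rows(rows, sensitive_fields, key=OWNER_KEY):
    """Tokenize sensitive fields during an ETL pass.

    Returns (de-identified rows, reverse_map). The de-identified rows
    go to Hadoop for analytics; reverse_map never leaves the data
    owner and supports later re-identification of individuals.
    """
    reverse_map = {}
    out = []
    for row in rows:
        clean = dict(row)
        for field in sensitive_fields:
            token = hmac.new(key, row[field].encode(),
                             hashlib.sha256).hexdigest()[:16]
            reverse_map[token] = row[field]  # kept by the owner only
            clean[field] = token
        out.append(clean)
    return out, reverse_map


# Example ETL pass: patient IDs are tokenized, diagnoses stay analyzable.
records = [{"patient_id": "P-1001", "dx": "E11.9"},
           {"patient_id": "P-1002", "dx": "I10"}]
deidentified, owner_map = deidentify_rows(records, ["patient_id"])
```

Because the tokens are deterministic, analysts can still group and join on `patient_id` across datasets; only the data owner, holding `owner_map` (or the key), can map a flagged token back to a real person.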
Related resources:
- Technical Brief: Protecting Enterprise Data in Hadoop
- Technical Brief: Data-Centric Security vs. Database-Level Security
- Data Sheet: Voltage SecureData Suite for Hadoop
- Use Case Brief: Data De-Identification
- Solution Brief: Voltage Enterprise Security for Big Data
- Solution Brief: Voltage Mainframe Security
- White Paper: Big Data, Meet Enterprise Data Security
- White Paper: Meeting Data Residency and Compliance Challenges in Global Enterprises
- Case Study: US Government Organization: Advancing Medical Research by Sharing Big Data
- Case Study: Global Investment Bank – Solving Data Residency and Privacy Compliance Challenges