
RAID Technology


RAID (Redundant Array of Independent Disks) is a data storage virtualization technology that combines multiple physical hard drives into a single logical unit to improve performance, redundancy, or both. RAID can be implemented via software (OS-level) or hardware (dedicated RAID controllers).

We will cover the industry-standard RAID levels and work through practical disk-level scenarios to understand RAID better, so you can apply the same concepts to your own storage devices & NAS.


RAID-based storage architecture :

A RAID-based storage architecture can be pictured as a Data Flow Diagram (DFD) illustrating the logical structure of data storage. At the core, RAID provides redundancy and performance enhancements by managing multiple physical disks as a single unit. On top of this RAID layer, a Storage Pool is created, grouping multiple disks into one large, manageable storage space. From this pool, Data Volumes are allocated to define logical storage units, which can be further divided into Storage LUNs (Logical Unit Numbers) that act as virtual storage partitions for efficient data management. These LUNs are then mapped and made accessible as Network Mapped Storage, allowing external systems to connect and use the allocated space via network protocols such as NFS, SMB, or iSCSI. This structured approach is commonly used in enterprise storage solutions, SAN (Storage Area Network) environments, and NAS (Network Attached Storage) setups, ensuring optimized storage allocation, redundancy, and performance scalability.
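As a rough illustration of this pool → volume → network-mapped flow on a Linux box, here is a minimal sketch using LVM and an NFS export. The device names /dev/sdb and /dev/sdc, the pool/volume names, and the subnet are assumptions for the example only:

# Physical disks (in practice these would already sit behind a RAID controller or md array)
pvcreate /dev/sdb /dev/sdc
# "Storage Pool" - group the disks into one volume group
vgcreate storage_pool /dev/sdb /dev/sdc
# "Data Volume" - carve a logical volume out of the pool
lvcreate -L 500G -n data_volume storage_pool
mkfs.ext4 /dev/storage_pool/data_volume
mount /dev/storage_pool/data_volume /srv/share
# "Network Mapped Storage" - expose the volume over NFS
echo "/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra

In an iSCSI/SAN setup, the same volume would instead be exported to clients as a LUN rather than an NFS share.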


RAID 0

RAID 0, also termed the Striping method, is applied on two or more independent disks of the same make, model & size. RAID 0 is used when the organization wants the maximum usable space and throughput and does not care about data redundancy or backup.

Use Case : RAID 0

Use Case: High-Speed Data Processing

  • Scenario: Used in gaming PCs, video editing workstations, and high-performance computing (HPC) where speed is more important than redundancy.

  • Example: A video production studio that needs ultra-fast read/write speeds for editing 8K raw footage without worrying about data redundancy

Let's understand it better with a real-life, use-case-based example:

### WE HAVE TWO 1TB HDDs AND WE ARE APPLYING RAID 0 ON THEM:
1TB - STORAGE A & 1TB - STORAGE B = 1.89TB of total usable space
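For reference, a minimal sketch of building the same two-disk RAID 0 array with Linux software RAID (mdadm); /dev/sdb and /dev/sdc are placeholder disks assumed for the example:

# Stripe two disks into one RAID 0 array (no redundancy)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0            # format the array
cat /proc/mdstat              # verify the array is active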

RAID 1

RAID 1, also termed as Mirroring method, is applied on two or more independent disks of the same make, model & size. RAID 1 is used when the organization wants to ensure maximum data redundancy and fault tolerance, as it creates an exact copy of data on all disks, sacrificing usable storage space for reliability.

Use Case : RAID 1

Use Case: Critical Data Storage with Fault Tolerance

  • Scenario: Used in database servers, financial institutions, and medical records storage where data integrity is crucial.

  • Example: A hospital's patient record system that requires data redundancy to prevent loss of critical medical data

We can understand it better with the below example :

1TB -  STORAGE A
1TB -  STORAGE B
------------------
0.89 TB of usable space
# HERE THE DATA IS CLONED IN SECOND DISK IN REALTIME ENSURING 1 REDUNDANT COPY
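The equivalent two-disk mirror with mdadm might look like this (again assuming placeholder disks /dev/sdb and /dev/sdc):

# Mirror two disks - every write goes to both
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --detail /dev/md1       # shows both disks as active, in-sync members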

RAID 5

RAID 5, also termed as Striping with Parity method, is applied on three or more independent disks of the same make, model & size. RAID 5 is used when the organization wants to ensure both data redundancy and maximum usable space, as it distributes data and parity information across all disks, allowing recovery from a single disk failure while maintaining efficient storage utilization.

RAID 5 provides (n-1) disks of usable capacity, where n = number of disks. A MINIMUM of 3 and a MAXIMUM of 16 disks are supported by RAID 5.

Use Case - RAID 5

Use Case: Small to Medium-Sized Business (SMB) File Storage & Web Servers

  • Scenario: Used in file servers, web hosting, and cloud storage where data redundancy and efficient storage are needed.

  • Example: A small IT company that hosts client websites on a RAID 5 setup to balance cost, performance, and fault tolerance

We can understand it better with the below example :

1TB - STORAGE A  & 1TB - STORAGE B & 1TB - STORAGE C
APPLYING RAID 5 on the storage devices : (3-1) = 2
which means 1TB + 1TB = 1.89TB of usable storage & 1TB worth of space is used for parity (distributed across the disks)

With RAID 5 we can also add an extra 1TB as a 4th HDD acting as a Global Hot Spare: if any disk in the array becomes unhealthy or unusable, the array rebuilds onto the hot spare, keeping the data protected and adding an extra bit of redundancy.

1TB -  STORAGE A,B,C Each
1TB -  STORAGE D (HOT SPARE)  
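A sketch of the same layout with mdadm: a three-disk RAID 5 array with a fourth disk registered as a hot spare (device names are assumptions):

# Three data/parity disks plus one hot spare
mdadm --create /dev/md5 --level=5 --raid-devices=3 --spare-devices=1 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
# If any active disk fails, md automatically rebuilds onto the spare
mdadm --detail /dev/md5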

RAID 6

RAID 6, also termed as Striping with Double Parity method, is applied on four or more independent disks of the same make, model & size. RAID 6 is used when the organization wants to ensure high data redundancy and fault tolerance, as it distributes data along with two parity blocks across all disks, allowing recovery from up to two disk failures while maintaining efficient storage utilization.

RAID 6 provides (n-2) disks of usable capacity, where n = number of disks. A MINIMUM of 4 and a MAXIMUM of 16 disks are supported by RAID 6.

Use Case - RAID 6

Use Case: Enterprise-Level Data Storage with High Fault Tolerance

  • Scenario: Used in large data centers, cloud service providers (CSPs), and enterprise backup solutions where recovery from multiple failures is essential.

  • Example: A cloud backup service that stores customer data on RAID 6 to ensure business continuity even if two drives fail simultaneously

We can understand it better with the following example :

1TB -  STORAGE A & 1TB - STORAGE B
1TB -  STORAGE C & 1TB - STORAGE D
APPLYING RAID 6 : (4-2) = 2
= 1TB + 1TB usable space & 2TB worth of space used for parity (distributed across the disks)

With RAID 6 we can likewise add an extra 1TB as a 5th HDD acting as a Global Hot Spare: if any disk in the array becomes unhealthy or unusable, the array rebuilds onto the hot spare, adding an extra bit of redundancy.

1TB -  STORAGE A,B,C,D Each
1TB -  STORAGE E (HOT SPARE)  
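The same idea expressed with mdadm: four active disks plus one hot spare (assumed device names, sketch only):

mdadm --create /dev/md6 --level=6 --raid-devices=4 --spare-devices=1 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf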

Now that we have covered the basic RAID levels used in the industry, let's move on to the nested levels: RAID 10, 50 & 60.

RAID 10

RAID 10, also termed as Mirroring with Striping method, is applied on four or more independent disks of the same make, model & size. RAID 10 is used when the organization wants to ensure both high data redundancy and improved performance, as it combines disk mirroring (RAID 1) and striping (RAID 0), ensuring fault tolerance while enhancing read and write speeds.

RAID 1 + RAID 0 = RAID 10

Use Case - RAID 10

RAID 10 (Mirroring + Striping – High Performance & Redundancy)

Use Case: High-Performance Databases & Transaction Systems

  • Scenario: Used in banking, stock trading platforms, and high-frequency transaction databases where both speed and redundancy are critical.

  • Example: A banking institution using RAID 10 for its real-time transaction database, ensuring both high-speed read/write operations and data protection.

We can understand it better with the example below :

LET'S SAY WE HAVE 4 disks of 1TB each
# We will apply RAID 1 on disk A & disk B
1TB storage A -- RAID 1 -- 1TB storage B = 0.89TB usable space & 1 redundancy
1TB storage C -- RAID 1 -- 1TB storage D = 0.89TB usable space & 1 redundancy
# APPLYING RAID 0 NOW 
1TB - STORAGE AB -- RAID 0 -- 1TB - STORAGE CD = 1.89TB usable space & 1 REDUNDANCY

Since RAID 10 (RAID 1 + RAID 0) is used, it can tolerate the failure of one disk per mirrored pair (i.e., one disk from AB and one from CD) without data loss. However, if both disks in a mirrored pair fail, all data is lost.

We can also make use of 1 or 'n' Global Hot Spares to gain a second layer of redundancy in case of disk failure.
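mdadm can build this nested mirror-plus-stripe layout directly as a single RAID 10 array over the four disks (a minimal sketch; device names assumed):

# Four disks, striped across two mirrored pairs
mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde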


RAID 50

RAID 50, also termed as Striping with Distributed Parity method, is applied on six or more independent disks of the same make, model & size. RAID 50 is used when the organization wants to ensure a balance between performance, redundancy, and storage efficiency, as it combines RAID 5 (striping with parity) and RAID 0 (striping). This setup improves read and write speeds while allowing recovery from a single disk failure in each RAID 5 array.

RAID 5 + RAID 0 = RAID 50

Use Case - RAID 50

RAID 50 (RAID 5 + RAID 0 – Improved Performance & Redundancy)

Use Case: High-Performance File Servers & Virtualization Platforms

  • Scenario: Used in large-scale storage systems, high-end NAS/SAN solutions, and virtualization environments needing speed and fault tolerance.

  • Example: A large media company running a virtualized server environment for editing and rendering, balancing performance and protection.

Let's understand it with an example:

LET'S SAY WE HAVE 6 disks of 1TB each {STORAGE - ABCDEF}
# APPLYING RAID 5 on ABC
(n-1) --> (3-1) = 2 (ROUGHLY 1.89TB) usable space & 1 redundancy
# APPLYING RAID 5 on DEF
(n-1) --> (3-1) = 2 (ROUGHLY 1.89TB) usable space & 1 redundancy
## APPLYING RAID 0 on both RAID 5 DISK ARRAYS
1.89TB + 1.89TB = ROUGHLY 3.78TB usable space & 1 redundancy per RAID 5 array

Since RAID 50 (RAID 5 + RAID 0) is used, the system can tolerate one disk failure per RAID 5 array (i.e., one failure in ABC and one in DEF without data loss). However, if two disks fail in the same RAID 5 array, data is lost.

We can also make use of 1 or 'n' Global Hot Spares to gain additional redundancy in case of disk failure.
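mdadm has no single "RAID 50" level, but the same layout can be built by nesting: two RAID 5 arrays striped together with RAID 0 (a rough sketch, device names assumed):

# Two RAID 5 arrays of three disks each
mdadm --create /dev/md51 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm --create /dev/md52 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
# Stripe the two RAID 5 arrays together (RAID 0) to form RAID 50
mdadm --create /dev/md50 --level=0 --raid-devices=2 /dev/md51 /dev/md52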


RAID 60

RAID 60, also termed as Striping with Dual Distributed Parity method, is applied on eight or more independent disks of the same make, model & size. RAID 60 is used when the organization wants to ensure high fault tolerance, improved performance, and efficient storage utilization, as it combines RAID 6 (striping with double parity) and RAID 0 (striping). This setup allows for two disk failures per RAID 6 array while enhancing read and write speeds through striping.

RAID 6 + RAID 0 = RAID 60

Use Case - RAID 60

RAID 60 (RAID 6 + RAID 0 – Maximum Redundancy for Large-Scale Deployments)

Use Case: Mission-Critical Enterprise & Cloud Infrastructure

  • Scenario: Used in data warehousing, government agencies, and research institutions where massive data integrity and uptime are required.

  • Example: A national research center storing petabytes of climate modeling data that cannot afford more than two disk failures per array.

Let's understand it better with an example:

LET'S SAY WE HAVE 8 disks - 1TB each - {STORAGE ABCDEFGH}
# APPLYING RAID 6 on ABCD
(n-2) --> (4-2) = 2TB usable space + 2 redundancy
# APPLYING RAID 6 on EFGH
(n-2) --> (4-2) = 2TB usable space + 2 redundancy
## NOW APPLYING RAID 0 on RAID 6 DISK ARRAYS
2TB + 2TB = 4TB usable space + 2 redundancy 

Since RAID 60 (RAID 6 + RAID 0) is used, the system can tolerate up to two disk failures per RAID 6 array (i.e., two failures in ABCD and two in EFGH without data loss). However, if three or more disks fail within the same RAID 6 array, data is lost.

We can also make use of 1 or 'n' Global Hot Spares to gain even more redundancy in case of disk failure.
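The same nesting idea gives RAID 60 with mdadm: two four-disk RAID 6 arrays striped with RAID 0 (sketch only, device names assumed):

mdadm --create /dev/md61 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm --create /dev/md62 --level=6 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mdadm --create /dev/md60 --level=0 --raid-devices=2 /dev/md61 /dev/md62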


Summary

  • RAID 0 → Gaming, Video Editing (Speed, No Redundancy)

  • RAID 1 → Medical Records, Banking (Data Safety, No Speed Gain)

  • RAID 5 → Web Hosting, SMB Servers (Good Balance, 1 Disk Failure Tolerance)

  • RAID 6 → Cloud Storage, Large Backups (Better Fault Tolerance, 2 Disk Failures)

  • RAID 10 → Databases, Stock Trading (Fast + Redundant)

  • RAID 50 → Virtualization, Large File Servers (Better than RAID 5 in Speed & Redundancy)

  • RAID 60 → Enterprise Cloud, Data Centers (Maximum Safety & Uptime)


Thick & Thin Storage types :

Thick provisioning (Thick Storage) refers to the process of pre-allocating the entire storage space upfront, ensuring that the designated capacity is always available for use. This approach guarantees performance and prevents fragmentation, making it ideal for critical applications like databases and financial systems that require consistent storage availability. However, thick provisioning can lead to inefficient storage utilization, as unused space remains reserved even if it is not actively in use.

On the other hand, Thin provisioning (Thin Storage) dynamically allocates storage on demand, assigning space only when data is written. This method optimizes storage utilization, allowing multiple virtual machines or applications to share a common storage pool efficiently. While thin provisioning enhances flexibility and cost efficiency, it comes with the risk of over-provisioning, where the allocated storage exceeds the actual physical capacity, potentially leading to performance degradation if not managed properly. Thin storage is widely used in cloud environments and virtualized infrastructures where scalability is a key requirement.
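On Linux, the difference is easy to see with LVM: a thick volume reserves its full size in the volume group at creation time, while a thin volume only draws space from a thin pool as data is written. A minimal sketch, assuming a volume group named vg_data and hypothetical volume names:

# Thick provisioning - 100G is reserved in the volume group immediately
lvcreate -L 100G -n lv_thick vg_data

# Thin provisioning - create a 200G thin pool, then a 500G virtual volume inside it
lvcreate -L 200G --thinpool tp_data vg_data
lvcreate -V 500G --thin -n lv_thin vg_data/tp_data   # space is allocated only as data is written
lvs vg_data                                          # Data% column shows actual pool usage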

