Archive
Meltdown and Spectre vulnerabilities: what they are, with references for patching Windows OS & SQL Server
Meltdown and Spectre are hardware vulnerabilities in modern computers that affect nearly all modern operating systems (Windows, Linux, etc.) and processors (including Intel, AMD and ARM). They allow programs to steal data currently being processed on the computer, such as passwords, personal photos, emails, instant messages and even business-critical documents.
–> On 3rd January 2018, three vulnerabilities affecting many modern processors were publicly disclosed by Google’s Project Zero:
1. CVE-2017-5715 (Spectre, branch target injection) – Systems with microprocessors utilizing speculative execution and indirect branch prediction may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis.
2. CVE-2017-5753 (Spectre, bounds check bypass) – Systems with microprocessors utilizing speculative execution and branch prediction may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis.
3. CVE-2017-5754 (Meltdown, rogue data cache load) – Systems with microprocessors utilizing speculative execution and indirect branch prediction may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis of the data cache.
Tech giants such as Apple, Alphabet, and Intel were aware of these vulnerabilities well before disclosure: it was Google who privately reported them to Intel, AMD and ARM Holdings back in June 2017. Apple kept mum for a while, and Intel decided not to inform US-CERT (the United States Computer Emergency Readiness Team) upon learning about Meltdown and Spectre, since hackers had not yet taken advantage of the flaws.
What’s the vulnerability all about?
Most chip manufacturers around the world use optimizations in their hardware to make it run faster; the two main techniques used to speed up processors are caching and speculative execution. If exploited, these could give hackers and malicious/rogue programs access to data that was considered totally protected. The attacks exploiting these two techniques are dubbed Meltdown and Spectre respectively, and are explained below.
What is Meltdown?
The vulnerability basically melts security boundaries which are normally enforced by the hardware. Meltdown breaks the mechanism that keeps applications from accessing arbitrary system memory. Consequently, applications can access system memory or cache.
Meltdown is a novel attack that overcomes memory isolation completely by providing a simple way for any user process to read the entire kernel memory of the machine it executes on, including all physical memory mapped in the kernel region. Meltdown does not exploit any software vulnerability, i.e., it works on all major operating systems. Instead, Meltdown exploits side-channel information available on most modern processors, e.g., on modern Intel microarchitectures since 2010, and potentially on CPUs of other vendors.
It is a software based side-channel attack exploiting out-of-order execution on modern processors to read arbitrary kernel- and physical-memory locations from an unprivileged user space program. Without requiring any software vulnerability and independent of the operating system, Meltdown enables an adversary to read sensitive data of other processes or virtual machines in the cloud with up to 503 KB/s, affecting millions of devices.
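The transient-execution behaviour described above can be illustrated with a toy model. Python cannot mount a real cache-timing attack, so this sketch (all names hypothetical) only simulates the logic: the faulting read is rolled back architecturally, but the cache footprint it left behind survives and decodes the secret.

```python
CACHE_LINE = 4096  # probe-array stride: one value per cache line, as in the paper

kernel_memory = {0xFFFF0000: ord('S')}   # a secret byte at a privileged address

def transient_read(addr, touched_lines):
    """Model out-of-order execution: the unauthorized load and the dependent
    probe-array access both happen before the privilege check faults."""
    secret = kernel_memory[addr]                   # unauthorized kernel read
    touched_lines.add(secret * CACHE_LINE)         # dependent access caches one line
    raise PermissionError("supervisor-only page")  # the fault arrives afterwards

def meltdown(addr):
    touched = set()   # stands in for "which probe-array lines ended up cached"
    try:
        transient_read(addr, touched)
    except PermissionError:
        pass          # registers are rolled back; the cache state is not
    # Recovery phase: a real attack times every probe line and picks the fast one.
    (line,) = touched
    return chr(line // CACHE_LINE)

leaked = meltdown(0xFFFF0000)   # recovers 'S' without permission to read it
```

In the real attack the "touched lines" are detected by timing loads from a user-space probe array (Flush+Reload), not by inspecting a set.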
What is Spectre?
This vulnerability is rooted in speculative execution itself. As the root cause is not easy to fix, it will haunt us for quite some time. Spectre tricks other applications into accessing arbitrary locations in their own memory.
Speculative execution is a technique used by high-speed processors to increase performance by guessing likely future execution paths and prematurely executing the instructions in them. For example, when the program’s control flow depends on an uncached value located in physical memory, it may take several hundred clock cycles before the value becomes known. Rather than wasting these cycles by idling, the processor guesses the direction of control flow, saves a checkpoint of its register state, and proceeds to speculatively execute the program on the guessed path. When the value eventually arrives from memory, the processor checks the correctness of its initial guess. If the guess was wrong, the processor discards the (incorrect) speculative execution by reverting the register state back to the stored checkpoint, resulting in performance comparable to idling. If the guess was correct, however, the speculative execution results are committed, yielding a significant performance gain.
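This guessing is exactly what Spectre’s bounds check bypass (variant 1) abuses. A hedged Python sketch of the gadget follows; since Python has no branch predictor, the speculation is modeled explicitly, and all names are illustrative.

```python
CACHE_LINE = 4096
array1 = [10, 20, 30, 40]    # victim's legitimate, in-bounds data
secret = "KEY"               # victim memory that happens to sit past array1

def victim(x, cached_lines, predicted_taken):
    # The bounds check the attacker bypasses; `predicted_taken` models the
    # branch predictor guessing "in bounds" after being trained that way.
    if x < len(array1) or predicted_taken:
        # Transient out-of-bounds read, modeled here by reaching into `secret`
        value = array1[x] if x < len(array1) else ord(secret[x - len(array1)])
        cached_lines.add(value * CACHE_LINE)   # leaves a cache footprint

def spectre_read(i):
    cached = set()
    for x in range(len(array1)):               # train the predictor in-bounds
        victim(x, set(), predicted_taken=False)
    victim(len(array1) + i, cached, predicted_taken=True)  # malicious index
    (line,) = cached                           # probe which line got cached
    return chr(line // CACHE_LINE)

leaked = "".join(spectre_read(i) for i in range(len(secret)))
```

As with Meltdown, a real exploit infers the cached line via load timing rather than a Python set; the point of the sketch is only the shape of the gadget and the train-then-attack sequence.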
Guidance for Windows OS: [Server link], [Client link]
Guidance for SQL Server: [link]
Guidance for Azure: [link]
Guidance for Oracle: [link]
Guidance for AWS: [link]
Meltdown demos (video):
References:
– Google Project Zero
– meltdownattack.com (Meltdown PDF)
– spectreattack.com (Spectre PDF)
– Good read on Meltdown and Spectre (csoonline.com)
– Google Retpoline (Jump Over ASLR)
– Microsoft Cloud blog
– stratechery.com
– blog.bitnami.com
What is Lambda Architecture, and what does Azure offer with its new Cosmos DB?
Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch processing and stream processing methods, and minimizing the latency involved in querying big data.
It is a generic, scalable and fault-tolerant data-processing architecture that addresses batch- and speed-latency scenarios with big data and map-reduce.
–> The system consists of three layers: Batch Layer, Speed Layer & Serving Layer
1. All data is pushed into both the Batch layer and Speed layer.
2. The Batch layer has a master dataset (immutable, append-only set of raw data) and pre-computes the batch views.
3. The Serving layer has Batch views for fast queries.
4. The Speed Layer compensates for processing time (to the serving layer) and deals with recent data only.
5. All queries can be answered by merging results from Batch views and Real-time views or pinging them individually.
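The five steps above can be sketched with plain Python structures. This is a minimal toy (all names are hypothetical) that counts per-user clicks:

```python
master_dataset = []   # Batch layer: immutable, append-only raw data
batch_view = {}       # Serving layer: precomputed aggregate for fast queries
speed_view = {}       # Speed layer: covers only data since the last batch run

def ingest(event):
    """Step 1: all data is pushed into both the batch and the speed layer."""
    master_dataset.append(event)
    user = event["user"]
    speed_view[user] = speed_view.get(user, 0) + event["clicks"]

def run_batch():
    """Step 2: recompute the batch view from the full master dataset.
    The speed layer then resets, since the batch view now covers everything."""
    batch_view.clear()
    for e in master_dataset:
        batch_view[e["user"]] = batch_view.get(e["user"], 0) + e["clicks"]
    speed_view.clear()

def query(user):
    """Step 5: answer by merging the batch view with the real-time view."""
    return batch_view.get(user, 0) + speed_view.get(user, 0)

ingest({"user": "alice", "clicks": 2})
ingest({"user": "alice", "clicks": 3})
run_batch()                              # batch view now holds 5 for alice
ingest({"user": "alice", "clicks": 1})   # only in the speed layer so far
total = query("alice")                   # 5 (batch) + 1 (speed) = 6
```

The design choice this illustrates: the batch view is always recomputable from the immutable master dataset, so the low-latency speed layer can stay small and disposable.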
Lambda Architecture with Azure:
Azure offers you a combination of the following technologies to accelerate real-time big data analytics:
1. Azure Cosmos DB, a globally distributed and multi-model database service.
2. Apache Spark for Azure HDInsight, a processing framework that runs large-scale data analytics applications.
3. Azure Cosmos DB change feed, which streams new data to the batch layer for HDInsight to process.
4. The Spark to Azure Cosmos DB Connector
How Azure simplifies the Lambda Architecture:
1. All data is pushed into Azure Cosmos DB for processing.
2. The Batch layer has a master dataset (immutable, append-only set of raw data) stored in Azure Cosmos DB. Using HDI Spark, you can pre-compute your aggregations to be stored in your computed Batch Views.
3. The Serving layer is an Azure Cosmos DB database with collections for the master dataset and computed Batch View for fast queries.
4. The Speed layer compensates for processing time (to the serving layer) and deals with recent data only. It utilizes HDI Spark to read the Azure Cosmos DB change feed. This enables you to persist your data as well as to query and process it concurrently.
5. All queries can be answered by merging results from batch views and real-time views, or pinging them individually.
–> For complete details check here in Microsoft Docs: Azure Cosmos DB: Implement a lambda architecture on the Azure platform
Create a new Azure SQL Database (on PaaS) step by step
Azure SQL Database is Microsoft’s cloud-based service built on the on-premises version of the SQL Server relational database. It is a fully managed relational cloud database service built for developers.
–> Today we will see how you can create a new SQL Database on Microsoft Azure:
1. Please make sure you have an Azure account and subscription.
2. Go to Microsoft Azure portal:
– In the menu, click New
– Select the Databases in the Azure Marketplace blade
– Click SQL Database
3. In the SQL Database blade enter the following settings:
– Name: MyFirstAzureSQLDB
– Subscription: Select your Azure Subscription
– Resource Group: Select the Resource Group, or create new
– Select source: We will choose here “Sample (AdventureWorksLT)”
– Server: Create a new logical server with following settings:
– Server name: Provide a globally unique name
– Server admin login: Provide a valid user name of your choice
– Password: Provide a valid password
– Location: Select a valid location
– Allow Azure services to access server: Select the check box
– Want to use SQL elastic pool? Select “Not now”
– Pricing tier: As you click on this option you will be taken to the Configure Performance blade, select Basic for now, but you can choose higher tiers as per your workloads and needs.
– Collation: Leave it as default, SQL_Latin1_General_CP1_CI_AS
– Pin to dashboard: Select the check box
4. Finally click the Create button on the main blade. Provisioning a new SQL Database takes a few minutes, and you can see the deployment progress at the top-right corner of your portal. Once the deployment is done you will see the following message at the same place:
5. Since you selected to pin the new database, you should be able to see it on your dashboard like this:
6. Click on the shortcut and you will be taken to its Settings and Properties, where you can check resources, various options, tweak settings and do some admin related stuff.
7. The portal also provides a basic Query Editor to execute SQL queries, check data in tables, explore the database, check some DB objects, etc. After selecting the Query Editor, click the Login button at the top and provide the user name and password you supplied while creating the database.
8. Try exploring the sample database; it gives you limited features to check Tables, Views and SPs. You can try writing a SQL query and running it against the tables/views, or executing the SPs. The results will be shown in the pane below.
9. You can also connect to this Azure SQL Database from SSMS installed on your PC or a remote server, which I’ll show in my next [blog post].
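For repeatable deployments, the same provisioning can also be scripted. Here is a hedged sketch using the Azure CLI; the resource group, server name, admin credentials and location are placeholders, and available options may vary across CLI versions (check `az sql db create --help`):

```shell
# Create a resource group, a logical server, and the sample database.
# The server name must be globally unique; replace all placeholders first.
az group create --name MyResourceGroup --location eastus

az sql server create \
    --name my-unique-sql-server \
    --resource-group MyResourceGroup \
    --location eastus \
    --admin-user myadmin \
    --admin-password '<a-strong-password>'

az sql db create \
    --name MyFirstAzureSQLDB \
    --resource-group MyResourceGroup \
    --server my-unique-sql-server \
    --sample-name AdventureWorksLT \
    --service-objective Basic
```

This mirrors the portal choices above: the AdventureWorksLT sample as the source and the Basic pricing tier.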
–> Terms used above:
1. Resource Group: is a collection of resources that share the same life cycle, permissions, and policies.
2. Server: or Logical Server, acts as a central administrative point for multiple databases, including elastic pools, logins, firewall rules, auditing rules, threat detection policies, and failover groups.
3. DTU (Database Transaction Unit): Microsoft guarantees a certain level of resources for a database (independent of any other database in the Azure cloud), providing a predictable level of performance. This amount of resources is calculated as a number of Database Transaction Units, or DTUs, and is a blended measure of CPU, memory, and I/O (data and transaction log I/O).
4. Elastic Pools: provide a simple and cost effective solution for managing the performance of multiple databases within a fixed budget. An elastic pool provides compute (eDTUs) and storage resources that are shared between all the databases it contains. Databases within a pool only use the resources they need, when they need them, within configurable limits. The price of a pool is based only on the amount of resources configured and is independent of the number of databases it contains.
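The eDTU-sharing behaviour described for elastic pools can be illustrated with a toy calculation. The pool size, per-database cap and first-come admission order below are made-up simplifications, not Azure's actual scheduler:

```python
POOL_EDTUS = 100      # total compute purchased for the pool (hypothetical)
PER_DB_CAP = 50       # configurable per-database limit (hypothetical)

def admit(demands):
    """Grant each database its demand, bounded by its per-database cap and
    by whatever is left in the shared pool, in arrival order."""
    remaining = POOL_EDTUS
    grants = {}
    for db, want in demands.items():
        grant = min(want, PER_DB_CAP, remaining)
        grants[db] = grant
        remaining -= grant
    return grants

# Idle databases consume nothing, so one busy database can burst to its cap:
grants = admit({"db1": 70, "db2": 10, "db3": 0})
```

The point the sketch makes: databases only draw resources when they need them, the pool never hands out more than was purchased, and the price depends on the pool size, not on how many databases share it.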
Get started with Microsoft Azure, create your Account and free Subscription
Microsoft Azure is a comprehensive set of cloud services that developers and IT professionals use to build, deploy and manage applications through Microsoft’s global network of datacenters.
As of today (i.e. February 2018), Azure datacenters are spread across 42 regions.
–> Create a Microsoft Account: Sign up here: https://signup.live.com
–> Microsoft Azure Subscription: Sign up here: https://azure.microsoft.com/en-us/free/
What you get is a free subscription with $200 of credit, popular services free for 12 months, and a few services (~25) that are always free.
Note:
– To sign up you need to provide a valid phone number, a credit or debit card, and a Microsoft Account username.
– For the free Azure subscription, you will need to provide a valid credit card number for identity verification.
– Don’t worry, you will not be charged for any Azure services until you choose to upgrade to a pay-as-you-go subscription.
–> Azure Portal: After going through the above steps, you can now log in to the portal at https://portal.azure.com, which looks like below:
– The Left side Navigation Bar shows you various Services and the big “+” button at the top to create a new Azure Service.
– The center & right side is the area of Dashboard where you can pin your most used Resources as Tiles.
An Introduction to Cloud Computing …aligned with Microsoft Azure
What is Cloud Computing?
Cloud Computing is the delivery of computing services like servers, storage, databases, networking, software, analytics and more over the Internet (“the cloud”). Computing resources comprising various servers, applications, data and other resources are pooled and provided as a service over the Internet to individuals and organizations. Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you are billed for water or electricity at home.
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. – NIST
The two prominent Cloud Computing providers in the market currently are:
– Microsoft Azure and
– Amazon’s AWS.
Uses of Cloud Computing:
1. On-demand and Self-Service, without any human intervention or manual work.
2. Create as many Virtual Machines (VMs) of your choice of Operating System (OS) quickly without worrying about hardware and office/lab space.
3. Instantaneously Scale up and Scale down the VMs and other services
4. Create new apps and services quickly
5. Resource pooling and Elasticity.
6. Host websites, portals and blogs
7. Store, back up and recover data
8. Stream audio and video
9. Analyse data for patterns and make predictions
Benefits of Cloud Computing:
1. Cost: eliminates the capital expense of buying hardware and software and setting up and running on-site datacenters
2. Global Scale: Quickly scale up and scale out as and when you need more resources, scale down when not in need, and pay as you use.
3. Reliability: Provision of Data backup, Business Continuity and Disaster Recovery (BCDR), by mirrored data at multiple redundant sites on the cloud provider’s network.
4. Speed and Performance: The majority of computing resources can be provisioned in minutes, on state-of-the-art, latest-generation high-end hardware.
5. Productivity: Rather than spending time on IT management chores, IT teams can focus on important business goals.
Types of Cloud Computing:
As per NIST (the National Institute of Standards and Technology), a Cloud Computing service provider should offer the following three service models to its customers:
1. Infrastructure as a Service (IaaS): The consumer can provision Processing, Storage, Networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include Operating Systems and Applications.
– The consumer does not manage or control the underlying cloud infrastructure.
– But has control over Operating Systems, Storage, deployed Applications, and possibly limited control of select networking components (e.g., host firewalls).
– Example: Windows and Linux VMs, Blob Storage, SQL Server with Windows/Linux VM, Virtual Network, etc.
2. Platform as a Service (PaaS): The consumer can deploy onto the cloud infrastructure consumer-created or acquired applications created using Programming Languages and Tools supported by the provider.
– The consumer does not manage or control the underlying cloud infrastructure including Network, Servers, Operating Systems, or Storage.
– But has control over the deployed Applications and possibly application hosting environment configurations.
– Example: Azure SQL Database, DocumentDB, HDInsight, Data Factory, etc.
3. Software as a Service (SaaS): The consumer can use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email).
– The consumer does not manage or control the underlying cloud infrastructure including Network, Servers, Operating Systems, Storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
– Example: Microsoft Office 365, WordPress, Joomla, Django, etc.
Deployment Models:
1. Public cloud: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
– Example: Microsoft Azure.
2. Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.
– Example: Microsoft Azure Stack.
3. Hybrid cloud: This combines Public and Private clouds, bound together by technology that allows data and applications to be shared between them, providing businesses greater flexibility and more deployment options.
– Example: Cloud Bursting for load-balancing between clouds.