Windows Phone Dev Center Changes + Credit card validation no longer required

After Build we have had many announcements regarding the Windows Store and the Store Dev Centers. The objective of this post is not to cover the improvements to the Windows and Windows Phone Stores themselves, so before going to the dev side let's just say that you can now build once and deploy to both stores thanks to universal apps. Not only that: customers buy ONCE and get the app on both operating systems at no extra cost ^_^

Let's go through the major changes to the Store dev accounts:

1) Credit card validation no longer required for the registration process

This is something that students in particular had been requesting for years: the ability to create their own developer account without needing a credit card for account verification (remember that for students the account is FREE thanks to the DreamSpark program, but verification previously required a credit card).

Also, when your account reaches its one-year renewal, you will have the same options there. In addition, we are now enabling PayPal as a payment method for renewals, and even for registration, in the Windows Store (in markets where PayPal is currently supported).

2) New feedback features: Microsoft is slowly rolling out a program whereby developers can comment on user reviews of their handiwork, so you will soon be able to respond to user reviews of your apps and games. Here's a humorous demonstration:

[Image: a sample developer response to an app review]

As a developer, you will receive notifications like these, which let you keep track of what's going on with your "open cases":

[Image: a Dev Center review-response notification]

But it doesn't stop at debating personal opinions about the app: Windows Phone users are encouraged to report any questionable developer response via the reporting link in the "details" section of the app's description:

[Screenshots: the reporting link in the app's "details" section]

As a user: remember that your feedback can make the applications you own better, which in the end is what you want when you purchase a game or an app.
As a developer: remember that your users own your app because they think it is cool; they like it and they use it. Don't disappoint them: provide the best quality, the best performance, and regular updates.

3) Linking Windows Store and Windows Phone apps to create a universal Windows app

[Image: Halo shown as a universal Windows app]

Tired of paying twice for the same app? Universal apps now deliver a 'get once and download on all compatible Windows devices' customer experience, which we expect to increase both paid and free app downloads across device types.

Also, if you are integrating in-app purchases in your apps, this linked-app experience extends your durables and consumables so they can be used in both stores under the same identifier.

4) App name reservation

Developers can now reserve names for new Windows Phone apps up to 12 months in advance of release.

5) Consolidated price tiers

We have simplified pricing, which applies to paid apps and in-app purchases, and expanded the Windows developer opportunity with the addition of US$0.99 and $1.39 price tiers to the Windows Store.

6) Consistent certification policies

7) Reduced certification times: 10x faster!

There it is: we have reduced the app certification workflow time so that, in most cases, certification now takes a few hours versus the few days it took previously.

COMING SOON:

  1. Promotional pricing
  2. Pre-submission validation checks
  3. Touch-enabled device targeting

Summary:

As you can see, there are GREAT improvements and changes in both the Windows and Windows Phone Stores, all pointing in the same direction: build once, deploy everywhere. You have no excuse not to start deploying for Windows Phone and carve out your own revenue model and success in the Windows Store!

Happy submission and – May the code be with you –

Sources:

http://www.engadget.com/2014/04/18/microsoft-app-store-developer-responses-roll-out/
http://blogs.windows.com/windows_phone/b/windowsphone/archive/2014/04/17/you-may-soon-get-a-response-to-your-windows-phone-app-review.aspx
http://blogs.windows.com/windows/b/buildingapps/archive/2014/04/14/dev-center-now-open-for-windows-phone-8-1-and-universal-windows-app-submissions.aspx

Blog updated to v1.2 [19/04/2014]

Informational

If you follow my blog, this mini-post will be of interest to you: I have carried out a maintenance update.

Framework update

Update to Ghost version 0.4.2
Update to the most recent version of the theme (gamma …(read more)

Registration starts – Redmond Interoperability Protocols Plugfest 2014!!

Microsoft-hosted protocol plugfests provide software developers with the opportunity to learn more about the Microsoft protocols and to test their implementations of the Microsoft Open Specifications. Hosted on the Microsoft Redmond campus, each plugfest focuses on a specific task or technology area. Presentations are conducted by Microsoft engineers, who are also available for one-on-one and group discussions and to provide necessary assistance with configuration and running of the interoperability…(read more)

IoT (Internet of Things) = Internet of Your Things

This week there was an event in San Francisco, where the Azure Intelligent Systems Service was announced: a service on Azure for managing M2M, sensor clouds, and the like, so that connected things can cooperate and the data they accumulate can be put to use.

http://www.InternetOfYourThings.com

The keyword is "Internet of Your Things": the idea of starting by connecting your own things. A preview of the Azure service for managing all kinds of devices, together with the device-side SDK, is now available.

http://blogs.msdn.com/b/windows-embedded/archive/2014/04/15/microsoft-azure-intelligent-systems-service-limited-public-preview-now-available.aspx

Give it a try. At de:code at the end of May, I plan to cover some information about this service in a session as well.

The San Francisco event was streamed, and you can watch it at

https://www.microsoft.com/en-us/server-cloud/whats-new.aspx#fbid=YJudAz7bV_f?bid=YJudAz7bV_f

so be sure to check that out too. In addition, SQL Server 2014 and the Analytics Platform Service were announced.

The Windows kernel running on Intel's Galileo and devices running the .NET Micro Framework were shown off at Build; things are getting fun, aren't they?

 

 

Feature comparison: EWS vs. EWS Managed API

Are you a .NET developer who develops custom applications using the Exchange Web Services (EWS) Managed API or EWS (auto-generated proxies)? Then this is for you. The EWS Managed API provides an intuitive interface for developing client applications that use EWS. The API enables unified access to Exchange resources, while using Outlook-compatible business logic. In short, you can use the EWS Managed API to access EWS in versions of Exchange starting with Exchange Server 2007 Service Pack 1 (SP1), including…(read more)
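
As a quick, minimal sketch of the Managed API (the server, mailbox, and credentials below are placeholders, and error handling is omitted):

using Microsoft.Exchange.WebServices.Data;

class EwsDemo
{
    static void Main()
    {
        // target the earliest schema supported by the EWS Managed API
        var service = new ExchangeService( ExchangeVersion.Exchange2007_SP1 );
        service.Credentials = new WebCredentials( "user@contoso.com", "password" );

        // locate the EWS endpoint via Autodiscover, accepting only HTTPS redirects
        service.AutodiscoverUrl( "user@contoso.com", url => url.StartsWith( "https://" ) );

        // list the subjects of the ten most recent inbox items
        foreach ( var item in service.FindItems( WellKnownFolderName.Inbox, new ItemView( 10 ) ) )
            System.Console.WriteLine( item.Subject );
    }
}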

MIDMARKET SOLUTION PROVIDER – April 2014 Readiness Update

New video: Office 365 training

Grow your cloud expertise and help your customers move to the cloud. Learn how to get started with Microsoft Office 365 training—for sales and technical professionals—through this fun new video.

Do you know about Practice Accelerator?

Practice Accelerator sessions, designed for technical consultants and architects, enable you and your organization to increase skills in a specific solution or services area. Learn more about Practice Accelerator through this fun, informative, short video.

ASP.NET issue with auto-generated designer page

I have been facing this issue with VS2013: whenever I change my .aspx file (.NET Framework 4.5) containing an UpdatePanel/ScriptManager, the designer file generated for those controls is:

/// <summary>
/// ScriptManager1 control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify move field declaration from designer file to code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.ScriptManager ScriptManager1;

/// <summary>
/// UpdatePanel1 control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify move field declaration from designer file to code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.UpdatePanel UpdatePanel1;

But I have to change it to:

/// <summary>
/// ScriptManager1 control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify move field declaration from designer file to code-behind file.
/// </remarks>
protected global::System.Web.UI.ScriptManager ScriptManager1;

/// <summary>
/// UpdatePanel1 control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify move field declaration from designer file to code-behind file.
/// </remarks>
protected global::System.Web.UI.UpdatePanel UpdatePanel1;

 

Fix:

1) Either reset System.Web.UI.WebControls.UpdatePanel back to System.Web.UI.UpdatePanel (same for ScriptManager) every time the .aspx file is modified… [recommended] but tedious.

2) I found that using a Register directive at the top of the ASCX file seemed to properly override the designer's default behavior of picking the 4.0 location for the 3.5 control (I think that is the underlying issue: it is a 4.0 designer that is backwards compatible with 3.5). [Recommended], but be careful when adding controls with <asp:

 

<%@ Register TagPrefix="asp" Namespace="System.Web.UI" Assembly="System.Web" %>

<asp:ScriptManager runat="server" ID="smLocationsMap" />

3) You can include System.Web.dll and System.Web.Design.dll in your bin folder (security/other issues). [Not recommended]

 

I hope this is helpful :)

 

Unit of Work – Expanded

In a previous post I discussed asynchronous repositories. A closely related and complementary design pattern is the Unit of Work pattern. In this post, I'll summarize the design pattern and cover a few non-conventional, but useful, extensions.

Overview

The Unit of Work is a common design pattern used to manage state changes to a set of objects. A unit of work abstracts all of the persistence operations and logic from other aspects of an application. Applying the pattern not only simplifies code that has persistence needs, but also makes it easy to change or otherwise swap out persistence strategies and methods.

A basic unit of work has the following characteristics:

  • Register New – registers an object for insertion.
  • Register Updated – registers an object for modification.
  • Register Removed – registers an object for deletion.
  • Commit – commits all pending work.
  • Rollback - discards all pending work.

Extensions

The basic design pattern supports most scenarios, but there are a few additional use cases that are typically not addressed. For stateful applications, it is usually desirable to support cancellation or simple undo operations by using deferred persistence. While this capability is covered via a rollback, there is no way to interrogate whether a unit of work has pending changes.

Imagine your application has the following requirements:

  • As a user, I should only be able to save when there are uncommitted changes.
  • As a user, I should be prompted when I cancel an operation with uncommitted changes.

To satisfy these requirements, we only need to make a couple of additions:

  • Unregister – unregisters pending work for an object.
  • Has Pending Changes - indicates whether the unit of work contains uncommitted items.
  • Property Changed – raises an event when a property has changed.

Generic Interface

After considering what is likely the majority of all plausible usage scenarios, we now have enough information to create a general-purpose interface.

public interface IUnitOfWork<T> : INotifyPropertyChanged where T : class
{
    bool HasPendingChanges
    {
        get;
    }
    void RegisterNew( T item );
    void RegisterChanged( T item );
    void RegisterRemoved( T item );
    void Unregister( T item );
    void Rollback();
    Task CommitAsync( CancellationToken cancellationToken );
}

Base Implementation

It would be easy to stop at the generic interface definition, but we can do better. It is pretty straightforward to create a base implementation that handles just about everything except the commit operation.

public abstract class UnitOfWork<T> : IUnitOfWork<T> where T : class
{
    private readonly IEqualityComparer<T> comparer;
    private readonly HashSet<T> inserted;
    private readonly HashSet<T> updated;
    private readonly HashSet<T> deleted;

    protected UnitOfWork()
    protected UnitOfWork( IEqualityComparer<T> comparer )

    protected IEqualityComparer<T> Comparer { get; }
    protected virtual ICollection<T> InsertedItems { get; }
    protected virtual ICollection<T> UpdatedItems { get; }
    protected virtual ICollection<T> DeletedItems { get; }
    public virtual bool HasPendingChanges { get; }

    protected virtual void OnPropertyChanged( PropertyChangedEventArgs e )
    protected virtual void AcceptChanges()
    protected abstract bool IsNew( T item )
    public virtual void RegisterNew( T item )
    public virtual void RegisterChanged( T item )
    public virtual void RegisterRemoved( T item )
    public virtual void Unregister( T item )
    public virtual void Rollback()
    public abstract Task CommitAsync( CancellationToken cancellationToken );

    public event PropertyChangedEventHandler PropertyChanged;
}

Obviously by now, you've noticed that we've added a few protected members to support the implementation. We use HashSet<T> to track all inserts, updates, and deletes. By using HashSet<T>, we can easily ensure we don't track an entity more than once. We can also now apply some basic logic, such as: an item pending insert should never also be enqueued for update, and a delete against an uncommitted insert should simply negate the insert. In addition, we add the ability to accept (e.g. clear) all pending work after the commit operation has completed successfully.
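
For illustration only, here is a sketch (not the attached implementation) of how RegisterChanged and RegisterRemoved might enforce those rules against the tracking collections shown above:

public virtual void RegisterChanged( T item )
{
    // an item pending insert or delete is never also enqueued as an update
    if ( this.InsertedItems.Contains( item ) || this.DeletedItems.Contains( item ) )
        return;

    this.UpdatedItems.Add( item );
    this.OnPropertyChanged( new PropertyChangedEventArgs( "HasPendingChanges" ) );
}

public virtual void RegisterRemoved( T item )
{
    // deleting an uncommitted insert simply negates the insert
    if ( !this.InsertedItems.Remove( item ) )
    {
        this.UpdatedItems.Remove( item );
        this.DeletedItems.Add( item );
    }

    this.OnPropertyChanged( new PropertyChangedEventArgs( "HasPendingChanges" ) );
}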

Supporting a Unit of Work Service Locator

Once we have all the previous pieces in place, we could again stop, but there are multiple ways in which a unit of work could be used in an application that we should consider:

  • Imperatively instantiated in code
  • Composed or inserted via dependency injection
  • Centrally retrieved via a special service locator facade

The decision as to which approach to use is at a developer's discretion. In general, when composition or dependency injection is used, the implementation is handled by another library, and some mediating object (ex: a controller) will own the logic as to when or if entities are added to the unit of work. When a service locator is used, most or all of the logic can be baked directly into an object to enable self-tracking. In the rest of this section, we'll explore a UnitOfWork singleton that plays the role of a service locator.

public static class UnitOfWork
{
    public static IUnitOfWorkFactoryProvider Provider
    {
        get;
        set;
    }
    public static IUnitOfWork<TItem> Create<TItem>() where TItem : class
    public static IUnitOfWork<TItem> GetCurrent<TItem>() where TItem : class
    public static void SetCurrent<TItem>( IUnitOfWork<TItem> unitOfWork ) where TItem : class
    public static IUnitOfWork<TItem> NewCurrent<TItem>() where TItem : class
}

Populating the Service Locator

In order to locate a unit of work, the locator must be backed with code that can resolve it. We should also consider composite applications where there may be many units of work defined by different sources. The UnitOfWork singleton is configured by supplying an instance to the static Provider property.

Unit of Work Factory Provider

The IUnitOfWorkFactoryProvider interface can simply be thought of as a factory of factories. It provides a central mechanism for the service locator to resolve a unit of work via all known factories. In composite applications, implementers will likely want to use dependency injection. For ease of use, a default implementation is provided whose constructor accepts Func<IEnumerable<IUnitOfWorkFactory>>.

public interface IUnitOfWorkFactoryProvider
{
    IEnumerable<IUnitOfWorkFactory> Factories
    {
        get;
    }
}
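
Since the default implementation described above amounts to little more than a wrapper around the supplied function, a minimal sketch of it might look like this (the class name matches the one used in the wiring example later in this article):

public class UnitOfWorkFactoryProvider : IUnitOfWorkFactoryProvider
{
    private readonly Func<IEnumerable<IUnitOfWorkFactory>> providerFunc;

    public UnitOfWorkFactoryProvider( Func<IEnumerable<IUnitOfWorkFactory>> providerFunc )
    {
        this.providerFunc = providerFunc;
    }

    // the factories are resolved lazily, each time they are requested
    public IEnumerable<IUnitOfWorkFactory> Factories
    {
        get
        {
            return this.providerFunc();
        }
    }
}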

Unit of Work Factory

The IUnitOfWorkFactory interface is used to register, create, and resolve units of work. Implementers have the option to map as many units of work to a factory as they like. In most scenarios, only one factory is required per application or composite component (ex: plug-in). A default implementation is provided that only requires the factory to register a function to create or resolve a unit of work for a given type. The Specification pattern is used to match or select the appropriate factory, but the exploration of that pattern is reserved for another time.

public interface IUnitOfWorkFactory
{
    ISpecification<Type> Specification
    {
        get;
    }
    IUnitOfWork<TItem> Create<TItem>() where TItem : class;
    IUnitOfWork<TItem> GetCurrent<TItem>() where TItem : class;
    void SetCurrent<TItem>( IUnitOfWork<TItem> unitOfWork ) where TItem : class;
}

Minimizing Test Setup

While all of the factory interfaces make the UnitOfWork singleton flexible and configurable, they make setting up test cases somewhat painful. If the required unit of work is not resolved, an exception will be thrown; however, if the test doesn't involve a unit of work, why should we have to set one up?

To solve this problem, the service locator will internally create a compatible uncommitable unit of work instance whenever a unit of work cannot be resolved. This behavior allows self-tracking objects to be used without having to explicitly set up a mock or stub unit of work. You might be thinking that this behavior hides composition or dependency resolution failures and that is true. However, any attempt to commit against these instances will throw an InvalidOperationException, indicating that the unit of work is uncommitable. This approach is the most sensible method of avoiding unnecessary setups, while not completely hiding resolution failures. Whenever a unit of work fails in this manner, a developer should realize that they have not set up their test correctly (ex: verifying commit behavior) or resolution is failing at run time.
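
As a sketch of the idea (the class name here is illustrative, not the attached implementation), the fallback instance could be as simple as:

internal sealed class UncommittableUnitOfWork<T> : UnitOfWork<T> where T : class
{
    protected override bool IsNew( T item )
    {
        // nothing is ever persisted, so treat every item as new
        return true;
    }

    public override Task CommitAsync( CancellationToken cancellationToken )
    {
        // the resolution failure only surfaces if a commit is actually attempted
        throw new InvalidOperationException( "The unit of work is uncommitable." );
    }
}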

Examples

The following outlines some scenarios as to how a unit of work might be used. For each example, we’ll use the following model:

public class Person
{
    public int PersonId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

Implementing a Unit of Work with the Entity Framework

The following demonstrates a simple unit of work that is backed by the Entity Framework:

public class PersonUnitOfWork : UnitOfWork<Person>
{
    protected override bool IsNew( Person item )
    {
        // any unsaved item will have an unset id
        return item.PersonId == 0;
    }
    public override async Task CommitAsync( CancellationToken cancellationToken )
    {
        using ( var context = new MyDbContext() )
        {
            foreach ( var item in this.InsertedItems )
                context.People.Add( item );

            foreach ( var item in this.UpdatedItems )
            {
                // attaching yields an Unchanged entity, so explicitly mark it as modified
                context.People.Attach( item );
                context.Entry( item ).State = EntityState.Modified;
            }

            foreach ( var item in this.DeletedItems )
            {
                // an entity must be attached before it can be removed
                context.People.Attach( item );
                context.People.Remove( item );
            }

            await context.SaveChangesAsync( cancellationToken );
        }
        this.AcceptChanges();
    }
}
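
Used directly, the unit of work might drive a scenario like the following (a hypothetical caller; the surrounding async context is assumed):

var unitOfWork = new PersonUnitOfWork();
var person = new Person() { FirstName = "John", LastName = "Doe" };

// IsNew returns true because PersonId has not been assigned yet
unitOfWork.RegisterNew( person );
await unitOfWork.CommitAsync( CancellationToken.None );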

Using a Unit of Work to Drive User Interactions

The following example illustrates using a unit of work in a rudimentary Windows Presentation Foundation (WPF) window that contains buttons to add, remove, cancel, and apply (or save) changes to a collection of people. The recommended approach to working with presentation layers such as WPF is to use the Model-View-View Model (MVVM) design pattern. For the sake of brevity and demonstration purposes, this example will use simple, albeit difficult to test, event handlers. All of the persistence logic is contained within the unit of work and the unit of work can report whether it has any pending work to help inform a user when there are changes. The unit of work can also be used to verify that the user truly wants to discard uncommitted changes, if there are any.

public partial class MyWindow : Window
{
    private readonly IUnitOfWork<Person> unitOfWork;
    public MyWindow() : this( new PersonUnitOfWork() ) { }
    public MyWindow( IUnitOfWork<Person> unitOfWork )
    {
        this.InitializeComponent();
        this.ApplyButton.IsEnabled = false;
        this.People = new ObservableCollection<Person>();
        this.unitOfWork = unitOfWork;
        this.unitOfWork.PropertyChanged +=
            ( s, e ) => this.ApplyButton.IsEnabled = this.unitOfWork.HasPendingChanges;
    }
    public Person SelectedPerson { get; set; }
    public ObservableCollection<Person> People { get; private set; }
    private void AddButton_Click( object sender, RoutedEventArgs e )
    {
        var person = new Person();
        // TODO: custom logic
        this.People.Add( person );
        this.unitOfWork.RegisterNew( person );
    }
    private void RemoveButton_Click( object sender, RoutedEventArgs e )
    {
        var person = this.SelectedPerson;
        if ( person == null ) return;
        this.People.Remove( person );
        this.unitOfWork.RegisterRemoved( person );
    }
    private async void ApplyButton_Click( object sender, RoutedEventArgs e )
    {
        await this.unitOfWork.CommitAsync( CancellationToken.None );
    }
    private void CancelButton_Click( object sender, RoutedEventArgs e )
    {
        if ( this.unitOfWork.HasPendingChanges )
        {
            var message = "Discard unsaved changes?";
            var title = "Save";
            var buttons = MessageBoxButton.YesNo;
            var answer = MessageBox.Show( message, title, buttons );
            if ( answer == MessageBoxResult.No ) return;
            this.unitOfWork.Rollback();
        }
        this.Close();
    }
}

Implementing a Self-Tracking Entity

There are many different ways and varying degrees of functionality that can be implemented for a self-tracking entity. The following is one of many possibilities that illustrates just enough to convey the idea. The first thing we need to do is create a factory.

public class MyUnitOfWorkFactory : UnitOfWorkFactory
{
    public MyUnitOfWorkFactory()
    {
        this.RegisterFactoryMethod( () => new PersonUnitOfWork() );
        // additional units of work could be defined here
    }
}

Then we need to wire up the service locator with a provider that contains the factory.

var factories = new IUnitOfWorkFactory[]{ new MyUnitOfWorkFactory() };
UnitOfWork.Provider = new UnitOfWorkFactoryProvider( () => factories );

Finally, we can refactor the entity to enable self-tracking.

public class Person
{
    private string firstName;
    private string lastName;

    public int PersonId
    {
        get;
        set;
    }
    public string FirstName
    {
        get
        {
            return this.firstName;
        }
        set
        {
            this.firstName = value;
            UnitOfWork.GetCurrent<Person>().RegisterChanged( this );
        }
    }
    public string LastName
    {
        get
        {
            return this.lastName;
        }
        set
        {
            this.lastName = value;
            UnitOfWork.GetCurrent<Person>().RegisterChanged( this );
        }
    }
    public static Person CreateNew()
    {
        var person = new Person();
        UnitOfWork.GetCurrent<Person>().RegisterNew( person );
        return person;
    }
    public void Delete()
    {
        UnitOfWork.GetCurrent<Person>().RegisterRemoved( this );
    }
    public Task SaveAsync()
    {
        return UnitOfWork.GetCurrent<Person>().CommitAsync( CancellationToken.None );
    }
}
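
Once the provider is wired up, consuming code never touches the unit of work directly; for example (a hypothetical caller, with the async context assumed):

var person = Person.CreateNew();   // registers the insert with the current unit of work
person.FirstName = "John";         // the setters register changes automatically
person.LastName = "Doe";
await person.SaveAsync();          // commits all pending work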

Conclusion

In this article we examined the Unit of Work pattern, added a few useful extensions to it, and demonstrated some common use cases for applying the pattern. There are many implementations of the Unit of Work pattern, and the concepts outlined in this article are no more correct than any of the alternatives. Hopefully you finish this article with a better understanding of the pattern and its potential uses. Although I didn't explicitly discuss unit testing, my belief is that most readers will recognize the benefits and the ease with which cross-cutting persistence requirements can be tested using a unit of work. I've attached all the code required to leverage the Unit of Work pattern as described in this article in order to accelerate your own development, should you choose to do so.

Using SSIS to Backup and Restore Extremely Large OLAP Databases

Working in the field of Business Intelligence, I get the opportunity to work with some really large (read that as multi-terabyte) OLAP databases. Multi-terabyte OLAP databases, while not yet commonplace, are being seen with greater frequency, and they present a few interesting challenges to developers and administrators. Performance tuning is one of the more obvious challenges, leading to discussions related to selection of the most appropriate storage mode and how to best partition the data. Purely from the perspective of query performance, MOLAP storage is going to provide better performance than would be expected with HOLAP or ROLAP storage. At these sizes, I/O throughput is a concern, and there are definite benefits to using the StorageLocation property to distribute partition data across multiple disks.

A less obvious aspect of performance tuning involves the ability to backup and restore these large databases before Mr. Murphy applies his law and it becomes necessary to recover from some form of disaster. Databases can become inaccessible for a number of reasons, including but not limited to hardware failures and BI developers with administrator permissions fully processing dimensions. The prospect of failure and the need to recover from a disaster gives administrators of OLAP databases some really good reasons to be involved in planning and testing of backup and recovery operations.

For reasons I’ll explore in a bit, Analysis Services backup and restore operations on multi-terabyte databases do not occur with lightning speed. Likewise, if it becomes necessary to execute a full process of a multi-terabyte database there’s a pretty good chance that it’s going to take more than just a few hours. Just imagine the fun of explaining to the company CEO, CFO, and CIO that you’ll have their production database back online in a couple of weeks. Therefore, the ability to restore a really big database to a functional state within a reasonable period of time is probably more important than the ability to create a backup in a reasonably short period of time. At least the discussion with the CEO, CFO, and CIO will be substantially less painful if you can say something to the effect of “We should have the database fully restored in a few hours.”

I recently had the opportunity to work with a customer who had a 35 hour window in which to create a backup of an OLAP database that occupied nearly three terabytes on disk. Because of the size of the database, the partitioning strategy involved distributing the data across eight separate LUNs. The reason for the 35 hour window was that a processing job was scheduled to begin execution on Sunday evenings at 11:00 PM. Because the database was in use, the backup job was scheduled to begin on Saturdays at noon. The problem was that the backup job would be terminated after 35 hours when the scheduled processing operation began executing. Since this was a production system, it was absolutely essential that the customer have a database backup and a plan to restore the data in the event of a disaster.

In the absence of a database backup, the alternative was to have the database remain offline for a period of nine (9) days to allow the database to be fully processed.

So what are the options for implementing backup and recovery with very large OLAP databases, and what are the drawbacks of each approach? Let's take a look at the following options:

  1. SAN Snapshot
  2. Backup and Restore
  3. Re-deploy and fully process
  4. Synchronization

 

1. SAN Snapshot: This is an option that has been explored in several scale-out scenarios solely for the purpose of moving metadata and data files from a dedicated processing server to one or more query servers (see Carl Rabeler's whitepaper entitled "Scale-Out Querying with Analysis Services Using SAN Snapshots", http://www.microsoft.com/en-us/download/details.aspx?id=18676). SAN snapshots are fantastic for backing up one or more LUNs for storage locally or at another site. There are essentially two types of SAN snapshot. While this is greatly simplified, it's sufficient to know that a Copy-on-Write snapshot captures only changes to stored data and affords rapid recovery, whereas a Split-Mirror snapshot references all of the data on a set of mirrored drives, grabbing the entire volume, which simplifies the recovery process. There are, however, some issues inherent in using SAN snapshots to create "backups" of databases.

Full recovery using a Copy-on-Write snapshot requires that all previous snapshots be available. Creation of Split-Mirror snapshots tends to be slower than Copy-on-Write snapshots, and the storage requirements tend to increase over time. In either case, the storage requirements eventually become rather onerous since there is no compression. Another consideration is that in order to generate a recoverable snapshot, all of the I/O on the server has to be stopped. Add to that, it's very likely that SAN snapshots will fall to the purview of someone known as a SAN or storage administrator and not a DBA. (Somehow, I rather doubt that the SAN or storage admin will be catching a lot of heat when a critical database goes belly up.) An important consideration is that each SAN vendor implements snapshots in a somewhat different manner. In this case, a SAN snapshot was merely an academic discussion, given that full snapshots of several volumes, none of which were mirrored, would have been required.

2. Backup and Restore: This is the most readily available approach to backing up and restoring an Analysis Services database, as the Backup/Restore functionality is built into the product. It can be readily accessed using the GUI in SQL Server Management Studio, or using XMLA commands like the following to create a backup:

<Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works</DatabaseID>
  </Object>
  <File>C:\PUBLIC\AWDEMO.ABF</File>
</Backup>

with the following XMLA command to restore:

<Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <File>C:\PUBLIC\AWDEMO.ABF</File>
  <DatabaseName>Adventure Works</DatabaseName>
</Restore>

or implemented via the AMO API, using code similar to the following to create a backup:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.AnalysisServices;

namespace BKUP
{
    class Program
    {
        static void Main(string[] args)
        {
            Server asServer = new Server();
            asServer.Connect("localhost");
            Database asDB = asServer.Databases.FindByName("Adventure Works");
            asDB.Backup(@"C:\PUBLIC\ASDEMO.ABF");
            asServer.Disconnect();
            asServer.Dispose();
        }
    }
}

and the following AMO code to restore the database:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.AnalysisServices;

namespace RESTORE
{
    class Program
    {
        static void Main(string[] args)
        {
            Server asServer = new Server();
            asServer.Connect("localhost");
            asServer.Restore(@"C:\PUBLIC\ASDEMO.ABF", "Adventure Works", true);
            asServer.Disconnect();
            asServer.Dispose();
        }
    }
}

All three are relatively straightforward approaches to backup and restore, and the fact that the functionality is built into the product makes this option very appealing. However, there are a few aspects of the native Backup/Restore functionality that make it problematic with extremely large databases. One extremely important consideration is that Analysis Services backup and restore operations are fixed at three (yes, you read that right: 3) threads. The net result is that the native backup/restore operations are roughly comparable in performance to standard file copy operations. While it would be a nice feature to have, Analysis Services doesn't have functionality similar to the differential backups that are available in the SQL Server database engine. In this case, it was known that the database backup was being terminated, incomplete, at 35 hours, so this was obviously not an option. Even if native backup had been an option, we knew that the restore operation would require more than 35 hours, making this a non-viable option.

 

3. Redeploy and fully process: This is obviously one solution, which requires nothing more than having either a copy of the database, in its current state, as a project or the XMLA script to re-create the database. On the positive side, the metadata would be pristine. One slight problem with this approach is that fully processing a multi-terabyte database is typically going to require multiple days, if not weeks, to complete. The amount of time required to fully process a large mission-critical database is probably not going to make this an acceptable approach to disaster recovery. In this particular case, fully processing the database would have taken in excess of nine (9) days to complete, so that discussion with the CEO, CFO, and CIO would have been something less than pleasant and cordial.

 

4. Use Synchronization: Synchronization is another piece of functionality that is natively available in Analysis Services. The product documentation indicates that it can be used either to deploy a database from a staging server to a production server, or to synchronize a database on a production server with changes made to the database on a staging server. In any event, the Synchronize functionality does allow an administrator to effectively create and periodically update a copy of an Analysis Services database on another server. This copies the data and metadata from a source server to a destination server. One of the benefits is that the database remains accessible on the destination server, affording users the ability to continue executing queries there until the synchronization completes, at which point queries are executed against the newly copied data. Much like Backup/Restore, the Synchronization functionality is readily available in the product and can be accessed using the GUI in SQL Server Management Studio or using an XMLA command like the following:

<Synchronize xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <ConnectionString>Provider=MSOLAP;Data Source=MyServer;Integrated Security=SSPI</ConnectionString>
    <Object>
      <DatabaseID>Adventure Works DW 2012 - EE</DatabaseID>
    </Object>
  </Source>
  <Locations />
  <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
  <ApplyCompression>true</ApplyCompression>
</Synchronize>

Using Synchronization to create a "backup" from a production system requires a destination server that is at least at the same service pack level as the source production server (ideally, one would want the identical build number). It also requires that the server being used as the destination have storage capacity equivalent to the source (in this case, production) server. The only server available as a possible destination was the development server that was being used to make, test, and then push modifications to the design of the database on the production server. For some strange reason, the team doing the database development/modification work had some pretty strong reservations about overwriting the work they were doing in the development environment.

Those were the options that were considered and unfortunately, for one reason or another, none were acceptable. That meant it was time to start getting creative. I knew that Carl Rabeler had done some work with copying databases for a scale-out solution, but that was using SAN snapshots. I was also very aware of a TechNet article by Denny Lee and Nicholas Dritsas (http://technet.microsoft.com/library/Cc966449) related to a scale-out solution using a SQL Server Integration Services (SSIS) package with Robocopy to copy metadata and data files from multiple databases from a processing server to the data folders of a group of query servers.

Armed with an idea and some information (I know, it's a dangerous combination) related to using a single instance of the multi-threaded version of Robocopy, it seemed like the beginning of a pretty tantalizing solution. Rather than copy the entire data directory, all that was really necessary was to copy the data for a single database. The initial plan was to detach the database, use Robocopy to move the data to a "safe" storage location, and then re-attach the database. Sounded simple enough, except for a slight complicating factor: the database was nearly three terabytes in size, and the data were distributed across eight LUNs.

Detaching and re-attaching an Analysis Services database is a relatively trivial matter that can be easily accomplished from SSMS, but since this was a job that would be scheduled to run on a weekend, there was a strong desire to automate the process as much as possible. Building on the prior use of SSIS with Robocopy by Denny Lee and Nicholas Dritsas, it was decided to use an SSIS package to contain and automate the entire process. This had several advantages:

1. This would allow the database to be detached and then, on success of that operation, begin the copy operation.

2. Since the data were distributed across eight drives, it would be possible to execute eight instances of Robocopy in parallel (one instance for each drive containing data).

3. Since only the data on each drive was required, it wasn't necessary to copy the contents of the entire drive, which allowed copying a single directory and the subdirectories it contained.

4. Since there were eight LUNs from which data were being copied, it made sense to copy data to eight separate LUNs on a storage server to avoid significant disk I/O contention on the target server.

5. The on-completion precedence constraints on the Robocopy tasks could be combined with an AND condition so that the database would be re-attached only after all of the data had been copied to a storage location.

A very simple command-line utility that could be used to detach or attach a database was really all that was required. Since there wasn't such a utility readily available, it was time to put on the developer hat and start slinging a little bit of code. That effort resulted in the following application code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.AnalysisServices;

namespace DropAdd
{
    class Program
    {
        static int Main(string[] args)
        {
            int returnval = 0;

            // dispatch on the argument count: 2 = detach, 3 = attach
            switch (args.Count().ToString())
            {
                case "2":
                    {
                        string servername = args[0].ToString().ToUpper().Trim();
                        string databasename = args[1].ToString().ToUpper().Trim();
                        ServerApp DetachIt = new ServerApp();
                        returnval = DetachIt.Detach(servername, databasename);
                        break;
                    }
                case "3":
                    {
                        string servername = args[0].ToString().ToUpper().Trim();
                        string filepathname = args[1].ToString().ToUpper().Trim();
                        string databasename = args[2].ToString().ToUpper().Trim();
                        ServerApp AttachIt = new ServerApp();
                        returnval = AttachIt.Attach(servername, filepathname, databasename);
                        break;
                    }
                default:
                    {
                        Console.WriteLine("Incorrect number of parameters");
                        Console.WriteLine("dropadd server_name database_name");
                        Console.WriteLine("dropadd server_name file_path database_name");
                        Console.ReadLine();
                        returnval = 0;
                        break;
                    }
            }
            return returnval;
        }
    }

    class ServerApp
    {
        public int Attach(string ServerName, string FilePathName, string DatabaseName)
        {
            Server asServer = new Server();
            int outcome = 0;
            asServer.Connect(ServerName.ToString().Trim());
            try
            {
                Database AsDB = asServer.Databases.FindByName(DatabaseName.ToString().Trim());
                if (AsDB != null)
                {
                    // a database with this name is already attached; report failure
                    outcome = 0;
                }
                else
                {
                    asServer.Attach(FilePathName.ToString().Trim());
                    outcome = 1;
                }
            }
            catch (Exception goof)
            {
                outcome = 0;
            }
            finally
            {
                asServer.Disconnect();
                asServer.Dispose();
            }
            return outcome;
        }

        public int Detach(string ServerName, string DatabaseName)
        {
            Server asServer = new Server();
            int outcome = 0;
            asServer.Connect(ServerName.ToString().Trim());
            try
            {
                Database AsDB = asServer.Databases.FindByName(DatabaseName.ToString().Trim());
                if (AsDB != null)
                {
                    AsDB.Detach();
                    outcome = 1;
                }
            }
            catch (Exception goof)
            {
                outcome = 0;
            }
            finally
            {
                asServer.Disconnect();
                asServer.Dispose();
            }
            return outcome;
        }
    }
}

 

Using that code, all that was necessary to detach a database was to execute the DropAdd command-line utility, passing the server name and database name as parameters. When it became necessary to attach a database, it was just a matter of executing DropAdd passing the server name, the file path to the database, and the database name as parameters.

Having addressed both detaching and re-attaching the database, it was necessary to consider how to best use Robocopy to move the data from the production server to a storage location. A small-scale test using Robocopy with the default threading option of 8 worked reasonably well. But since the design of the database distributed data across eight LUNs, it would be necessary to execute Robocopy once for each LUN on which data were stored. Running eight instances of Robocopy in serial would be a bit time consuming, and quite honestly it was suspected that doing so would run well past the 35-hour window for backup creation. An associated problem was determining when the last instance of Robocopy had completed execution. That led to a decision to execute eight instances of Robocopy in parallel.

 

The result was the design of an SSIS package looking something like the following:

[Image: the backup SSIS package control flow]

The SSIS package simply consisted of a set of 10 Execute Process tasks, with the following components:

Detach Database      Detach the database from the server
Robocopy Data Dir    Copy the data from the database directory
Robocopy G           Copy data from the Data directory on the G drive
Robocopy H           Copy data from the Data directory on the H drive
Robocopy I           Copy data from the Data directory on the I drive
Robocopy J           Copy data from the Data directory on the J drive
Robocopy K           Copy data from the Data directory on the K drive
Robocopy L           Copy data from the Data directory on the L drive
Robocopy M           Copy data from the Data directory on the M drive
Attach Database      Attach the database on completion of the copies

 

For the "Detach Database" task, the following properties were set on the Process tab:

Property                               Value
RequiredFullFileName                   True
Executable                             C:\CustomApps\DropAdd.exe
Arguments                              MyServer "My Big Database"
FailTaskIfReturnCodeIsNotSuccessValue  True
SuccessValue                           1
TimeOut                                0
WindowStyle                            Hidden

Precedence Constraint                  Success

 

In order to ensure that the data could be copied to a "safe" storage location, it was absolutely essential that the database be detached from the server in order to prevent write operations from occurring, which could result in files in the destination storage location becoming corrupt.

 

For the "Robocopy Data Dir" task, the following properties were set on the Process tab:

Property                               Value
RequiredFullFileName                   True
Executable                             C:\Windows\System32\Robocopy.exe
Arguments                              "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\Data\My Big Database.17.db" "\\StorageServer\e$\Main" /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue  False
SuccessValue                           1
TimeOut                                0
WindowStyle                            Hidden

Precedence Constraint                  Completion

 

For the "Robocopy G" task, the following properties were set on the Process tab:

Property                               Value
RequiredFullFileName                   True
Executable                             C:\Windows\System32\Robocopy.exe
Arguments                              "G:\Program Files\Microsoft SQL Server\OLAP\Data" "\\StorageServer\G$\G_Drive" /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue  False
SuccessValue                           1
TimeOut                                0
WindowStyle                            Hidden

Precedence Constraint                  Completion

 

The "Robocopy H" through "Robocopy M" tasks were configured identically to "Robocopy G", substituting the corresponding drive letter in the source path and the destination share (for example, "H:\Program Files\Microsoft SQL Server\OLAP\Data" and "\\StorageServer\H$\H_Drive" for the "Robocopy H" task).

 

For the "Attach Database" task, the following properties were set on the Process tab:

Property                               Value
RequiredFullFileName                   True
Executable                             C:\CustomApps\DropAdd.exe
Arguments                              MyServer "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\Data\My Big Database.17.db" "My Big Database"
FailTaskIfReturnCodeIsNotSuccessValue  True
SuccessValue                           1
TimeOut                                0
WindowStyle                            Hidden

Precedence Constraint                  None

 

This all assumed that the account executing the job had administrator permissions for the Analysis Services service, as well as sufficient permissions to access files on each of the drives that contained data. It also required that the account have sufficient permissions to write to the destination drives that were being used to store the files of what would become an uncompressed backup.

It seemed prudent to compare performance with the built-in backup functionality, so being an intrepid soul, I decided to test it out with a version of the Adventure Works database which had been modified to distribute partitions relatively evenly across eight logical drives. The native backup functionality required 55 seconds to create the backup on an 8-core machine with 24 GB of RAM. Having felt very confident in the newly minted solution, it was a bit disappointing to find that it took right at 53 seconds to create the copy. However, since Robocopy can be used to copy only changed files, it was decided to process three or four partitions and then run the comparison test again. This time, backup again required 55 seconds but the SSIS solution completed in 11 seconds: a good indication that, even though a full "backup" of the multi-terabyte database might not be achieved on the first execution, there was an extremely good chance of having a complete copy following a second execution of the SSIS package.

That meant it was time for the acid test, to see how well this solution would perform in the production environment. When the backup window opened, the SSIS package was executed. Approaching the 35-hour mark, the SSIS package had not yet completed execution, so it was decided to terminate the package and run the "Attach Database" task to re-attach the database. Somewhat disappointing, but it was encouraging to find that the formerly empty E drive now contained approximately 2.5 terabytes of data, so it was not a total failure. On that basis, it was decided to leave the solution in place and allow it to run during the next "backup" window.

When the next backup window opened, the SSIS package began executing, and it was extremely encouraging to find that it completed in seven hours. Checking the E drive, it now contained nearly three terabytes of data. The first thought was "SUCCESS" and now it's time for a nice cold beer. Of course, the second thought was something to the effect of "OK, what happens when one of the disks goes belly up, or one of the developers does a full process on one of the dimensions?", followed by "we have a backup solution and an uncompressed backup, but no way to restore it."

Time to go back to work to build another SSIS package that could be used to restore the database. But since we had a "backup" solution, the restore would be simple: it was just a matter of reverse engineering the "backup" solution. This task would be simpler, since we would be able to recycle the logic and some of the bits used to create the "backup" solution. It was known that the DropAdd code could be re-used to detach and attach the database. It was also a relatively trivial matter to simply change the order of the parameters passed to the tasks that executed Robocopy. Designing a process to restore the database presented a new challenge, in the form of "What happens in the case of a total system failure, when it becomes necessary to restore the database to a new but identically configured server?" That would require creating a directory to contain the database. The result was an SSIS package similar to what you see below:

[Image: the restore SSIS package control flow]

 

The “Restore” SSIS package consisted of 1 File Connection Manager, 1 File System Task and 10 Execute Process Tasks, with the following components:

MyFileConnection               File Connection Manager
Create Database Directory      Create the database directory if it does not already exist
Detach Database                Detach the database from the server
Restore Main Data Directory    Copy data from the backup to the database directory
Restore drive G Data           Copy data from the backup to the G drive
Restore drive H Data           Copy data from the backup to the H drive
Restore drive I Data           Copy data from the backup to the I drive
Restore drive J Data           Copy data from the backup to the J drive
Restore drive K Data           Copy data from the backup to the K drive
Restore drive L Data           Copy data from the backup to the L drive
Restore drive M Data           Copy data from the backup to the M drive
Attach Database                Re-attach the database on completion of the copies

 

For the “MyFileConnection” File Connection Manager, the following properties were set:

Property                                 Value
UsageType                                Create Folder
Folder                                   C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\Data\My Big Database.17.db

 

 

For the “Create Database Directory” task, the following properties were set on the General tab:

Property                                 Value
UseDirectoryIfExists                     True
Name                                     Create Database Directory
Description                              File System Task
Operation                                Create Directory
IsSourcePathVariable                     False
SourceConnection                         MyFileConnection

Precedence Constraint                    Success
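Outside of SSIS, this step amounts to a single framework call. As a minimal sketch (the path is the same environment-specific value configured in “MyFileConnection,” so treat it as an illustration):

    using System.IO;

    class CreateDatabaseDirectory
    {
        static void Main()
        {
            // Equivalent of the File System Task with UseDirectoryIfExists = True:
            // CreateDirectory is a no-op when the folder already exists, so
            // re-running the restore against an existing database directory is harmless.
            Directory.CreateDirectory(
                @"C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER" +
                @"\OLAP\Data\My Big Database.17.db");
        }
    }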

 

For the “Detach Database” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\CustomApps\DropAdd.exe
Arguments                                MyServer "My Big Database"
FailTaskIfReturnCodeIsNotSuccessValue    False
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    Success

 

In this case, it wasn’t really desirable to require the detach operation to succeed, given that one possible scenario was restoring the database to a “clean” environment on which it had never existed.
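DropAdd itself is a custom utility and its source isn’t reproduced here, but assuming it is a thin wrapper over AMO (Microsoft.AnalysisServices), a detach that tolerates a missing database might look something like the sketch below. The class name and argument layout are illustrative assumptions, not the actual DropAdd code.

    using Microsoft.AnalysisServices;   // AMO; ships with SQL Server

    class DropAddDetachSketch
    {
        // Usage: DropAddDetachSketch.exe <serverName> <databaseName>
        static int Main(string[] args)
        {
            var server = new Server();
            server.Connect(args[0]);                          // e.g. MyServer
            try
            {
                Database db = server.Databases.FindByName(args[1]);
                if (db == null)
                    return 1;  // nothing to detach on a "clean" server; still a success

                db.Detach();   // releases the .db folder so its files can be replaced
                return 1;      // matches SuccessValue = 1 in the package
            }
            finally
            {
                server.Disconnect();
            }
        }
    }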

 

For the “Restore Main Data Directory” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\Windows\System32\Robocopy.exe
Arguments                                "\\StorageServer\e$\Main"
                                         "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\Data\My Big Database.17.db"
                                         /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue    False
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    Completion
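A note on the SuccessValue and FailTaskIfReturnCodeIsNotSuccessValue pairing: Robocopy’s exit code is a bit mask (0 = nothing needed copying, 1 = files copied, 2 and 4 = extra or mismatched entries, 8 and above = real failures). A run that copies nothing returns 0, which is still a good outcome, so the task is configured not to fail on a non-matching return code. Here is a minimal C# sketch of what each of these restore tasks effectively does; the paths are illustrative placeholders, not the production values.

    using System;
    using System.Diagnostics;

    class RobocopyRestoreSketch
    {
        static bool Restore(string source, string destination)
        {
            var psi = new ProcessStartInfo
            {
                FileName = @"C:\Windows\System32\Robocopy.exe",
                // /S copies subdirectories; /PURGE deletes destination files and
                // folders that no longer exist in the source (the backup copy).
                Arguments = $"\"{source}\" \"{destination}\" /S /PURGE",
                UseShellExecute = false,
                CreateNoWindow = true               // equivalent of WindowStyle = Hidden
            };

            using (var process = Process.Start(psi))
            {
                process.WaitForExit();              // TimeOut = 0: wait indefinitely
                // Exit codes below 8 are informational; 8 or higher means failures.
                return process.ExitCode < 8;
            }
        }

        static void Main()
        {
            bool ok = Restore(@"\\StorageServer\G$\G_Drive",
                              @"G:\Program Files\Microsoft SQL Server\OLAP\Data");
            Console.WriteLine(ok ? "Copy succeeded." : "Copy failed.");
        }
    }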

 

For the “Restore drive G Data” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\Windows\System32\Robocopy.exe
Arguments                                "\\StorageServer\G$\G_Drive"
                                         "G:\Program Files\Microsoft SQL Server\OLAP\Data"
                                         /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue    False
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    Completion

 

For the “Restore drive H Data” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\Windows\System32\Robocopy.exe
Arguments                                "\\StorageServer\H$\H_Drive"
                                         "H:\Program Files\Microsoft SQL Server\OLAP\Data"
                                         /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue    False
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    Completion

 

For the “Restore drive I Data” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\Windows\System32\Robocopy.exe
Arguments                                "\\StorageServer\I$\I_Drive"
                                         "I:\Program Files\Microsoft SQL Server\OLAP\Data"
                                         /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue    False
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    Completion

 

For the “Restore drive J Data” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\Windows\System32\Robocopy.exe
Arguments                                "\\StorageServer\J$\J_Drive"
                                         "J:\Program Files\Microsoft SQL Server\OLAP\Data"
                                         /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue    False
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    Completion

 

For the “Restore drive K Data” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\Windows\System32\Robocopy.exe
Arguments                                "\\StorageServer\K$\K_Drive"
                                         "K:\Program Files\Microsoft SQL Server\OLAP\Data"
                                         /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue    False
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    Completion

 

For the “Restore drive L Data” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\Windows\System32\Robocopy.exe
Arguments                                "\\StorageServer\L$\L_Drive"
                                         "L:\Program Files\Microsoft SQL Server\OLAP\Data"
                                         /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue    False
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    Completion

 

For the “Restore drive M Data” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\Windows\System32\Robocopy.exe
Arguments                                "\\StorageServer\M$\M_Drive"
                                         "M:\Program Files\Microsoft SQL Server\OLAP\Data"
                                         /S /PURGE
FailTaskIfReturnCodeIsNotSuccessValue    False
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    Completion

 

For the “Attach Database” task, the following properties were set on the Process tab:

Property                                 Value
RequireFullFileName                      True
Executable                               C:\CustomApps\DropAdd.exe
Arguments                                MyServer
                                         "C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\Data\My Big Database.17.db"
                                         "My Big Database"
FailTaskIfReturnCodeIsNotSuccessValue    True
SuccessValue                             1
TimeOut                                  0
WindowStyle                              Hidden

Precedence Constraint                    None
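As with the detach step, the shape of the attach invocation can be sketched with AMO; again, this is an assumed form for a DropAdd-like utility, not its actual source:

    using Microsoft.AnalysisServices;   // AMO; ships with SQL Server

    class DropAddAttachSketch
    {
        // Usage: DropAddAttachSketch.exe <serverName> <databaseFolder>
        static int Main(string[] args)
        {
            var server = new Server();
            server.Connect(args[0]);     // e.g. MyServer
            try
            {
                // Attach takes the path to the database folder, e.g.
                // ...\OLAP\Data\My Big Database.17.db; the database name is
                // read from the metadata inside that folder.
                server.Attach(args[1]);
                return 1;                // matches SuccessValue = 1
            }
            finally
            {
                server.Disconnect();
            }
        }
    }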

 

For due diligence, it was decided to test the “Restore/Disaster Recovery” package using the same version of Adventure Works that was used for the initial testing of the “Backup” package. It was not entirely surprising to find that copying and attaching the database in that scenario took roughly as long as restoring from a native backup. To test a recovery scenario, a new “backup” was created with the backup SSIS package, a ProcessFull was executed on the Customers dimension, and the restore SSIS package was then run. It was very encouraging to find that the database was restored to full functionality in roughly 10 seconds.

It didn’t take long to receive a painful reminder of why a disaster recovery strategy is important, especially with extremely large databases. Shortly after both the “Backup” and “Restore” SSIS packages were completed, one of the developers on the team managed to accidentally execute a ProcessFull on a dimension used in all of the cubes contained in the database. At that point, there was a choice to be made. Fully processing the database would require a minimum of 9 days, and quite probably longer. The “Restore” SSIS package had undergone only limited testing, but the testing that had been done was extremely encouraging. Ultimately, the “Restore” SSIS package was run, and roughly eight (yes, 8) hours later a fully functional multi-terabyte production database was back online.