Question:

Is it possible to refactor TableEntities used by Azure Table Storage? For example, consider the following entity:

class MyEntity : TableEntity
{
    public string SomeID { get; set; }
}

Is it possible to preserve existing values when performing refactorings such as:

  • Renaming SomeID to SomeOtherID?
  • Changing the property type from string to Guid? (considering all existing values are actual GUIDs)

If so, what is the recommended approach for handling these schema changes in a consistent manner, similar to migrations in EF6?

Answer:

One option you could explore is a custom EntityResolver.

If you change the name of a property in your class from SomeID to SomeOtherID and insert the entities with the new name to table storage, you will have entities with SomeID and/or SomeOtherID fields in table storage.

When you query them back, you can provide a custom EntityResolver delegate that the storage SDK uses to construct your concrete entity type from the raw property dictionary. In that delegate you can add logic to handle this scenario and build the entity type you want.

The overloaded ExecuteQuery method on CloudTable takes an EntityResolver:

public virtual IEnumerable<TResult> ExecuteQuery<TResult>(
    TableQuery query,
    EntityResolver<TResult> resolver,
    TableRequestOptions requestOptions = null,
    OperationContext operationContext = null
)

EntityResolver is a delegate in which you decide how to construct your strongly typed entity from the property dictionary:

public delegate T EntityResolver<T>(
    string partitionKey,
    string rowKey,
    DateTimeOffset timestamp,
    IDictionary<string, EntityProperty> properties,
    string etag
);

So in this delegate you write code that handles key-value pairs with the keys SomeID and SomeOtherID while constructing your return value of type T.
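As a sketch of that idea (assuming the class property was renamed to SomeOtherID and that `table` is an initialized CloudTable), a resolver that accepts entities written under either property name might look like this:

```csharp
// Sketch of an EntityResolver that tolerates both the old and the new
// property name. MyEntity (with a SomeOtherID property) and `table`
// are assumptions for illustration.
EntityResolver<MyEntity> resolver = (partitionKey, rowKey, timestamp, properties, etag) =>
{
    var entity = new MyEntity
    {
        PartitionKey = partitionKey,
        RowKey = rowKey,
        Timestamp = timestamp,
        ETag = etag
    };

    // Prefer the new name; fall back to the old one for entities
    // written before the rename.
    if (properties.TryGetValue("SomeOtherID", out EntityProperty renamed))
    {
        entity.SomeOtherID = renamed.StringValue;
    }
    else if (properties.TryGetValue("SomeID", out EntityProperty original))
    {
        entity.SomeOtherID = original.StringValue;
    }

    return entity;
};

IEnumerable<MyEntity> results = table.ExecuteQuery(new TableQuery(), resolver);
```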

You can use the same approach to handle type changes: insert entities with the new schema and the changed property type, and handle the old representation in your EntityResolver when you read them back.
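For example, if the property type changed from string to Guid, the body of the resolver can branch on how the raw property was actually stored (a fragment for illustration only; `entity` is the object being built inside the resolver):

```csharp
// Inside the resolver: handle a property that used to be stored as a
// string but is now stored as a Guid. EdmType tells you the stored type.
if (properties.TryGetValue("SomeID", out EntityProperty prop))
{
    entity.SomeID = prop.PropertyType == EdmType.Guid
        ? prop.GuidValue.Value          // written with the new schema
        : Guid.Parse(prop.StringValue); // old entities stored GUID strings
}
```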

I would still recommend a data migration to the new data model rather than maintaining custom resolvers indefinitely. Custom resolvers can help while you are in the middle of the data migration process and still serving requests during that transition stage.

Answer:

I don't think there is a way to do this within the Azure Storage architecture itself. What you can do is read the entities and update them one by one (or batch by batch using entity group transactions).
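A minimal sketch of that read-and-rewrite migration, assuming `table` is an initialized CloudTable and the goal is to move SomeID (a GUID stored as a string) into a Guid-typed SomeOtherID:

```csharp
// One-off migration: read every entity untyped, rewrite the renamed and
// retyped property, and replace the entity. To use entity group
// transactions instead, group the replacements by partition key into a
// TableBatchOperation (at most 100 operations per batch).
foreach (DynamicTableEntity entity in table.ExecuteQuery(new TableQuery<DynamicTableEntity>()))
{
    if (entity.Properties.TryGetValue("SomeID", out EntityProperty old))
    {
        entity.Properties["SomeOtherID"] =
            EntityProperty.GeneratePropertyForGuid(Guid.Parse(old.StringValue));
        entity.Properties.Remove("SomeID");
        table.Execute(TableOperation.Replace(entity));
    }
}
```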
