What Does Durable Azure Functions Solve?

Azure Functions is the serverless compute offering in Azure; the Amazon Web Services equivalent is AWS Lambda. I won't go into detail about what serverless is or what advantages it provides in this post. You can find my learnings from a project where I implemented my web API using Azure Functions here.

I will be discussing Azure Functions V2 (still in preview at the time of writing this blog) and in particular the newly introduced durable functions. For a quick comparison of the V1 and V2 runtimes, have a look here.

What are “Durable Functions” in Azure?

Durable functions are an extension to Azure Functions that lets you write stateful functions. The state management is abstracted away behind async/await, allowing you to write orchestration logic in plain code. The runtime makes sure the state is durable even if the VM running the function is restarted or the function gets recycled.

Before going into detail, let's see what problem durable functions promise to solve; after that you will have a good understanding of where they fit in solution design.

Take this simple example.

[Figure: function chaining — the output of each function becomes the input of the next]

We have four functions that chain the output from the previous one as the input for the next. This type of orchestration is common in function design.

Before durable functions, there were a couple of ways of solving this.

  1. Have a separate orchestrator function and let it handle the workflow.
    • The orchestrator function needs to be running for the entire duration of the process. This ends up costing more because we have two functions running at the same time.
    • If the orchestrator function gets recycled or the VM restarts, the current state is lost.
  2. Have queues in between the functions and have each function be triggered by a queue message.
    • Requires a lot of queues and managing the connections/triggers between functions.

When faced with the function chaining problem, I’ve generally gone with the approach of using queues in the past. Whilst it was painful to do it that way, I got the benefit of the runtime handling situations where a function was recycled and the queue message had to be replayed.
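
To make the queue approach concrete, here is a rough sketch of what one link in such a chain might look like. The queue names, payload type and bindings are assumptions for illustration, not code from a real project:

// A minimal sketch of one link in a queue-based chain, assuming Azure Storage
// Queues. Queue names and the string payload are hypothetical; every function
// in the chain would need a similar pair of bindings.
[FunctionName("Function2")]
public static void Run(
    [QueueTrigger("function2-input")] string outputFromFunction1,
    [Queue("function3-input")] out string outputForFunction3,
    TraceWriter log)
{
    log.Info($"Processing: {outputFromFunction1}");

    // Do the actual work, then hand the result to the next queue.
    outputForFunction3 = outputFromFunction1 + " processed by Function2";
}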

Where do durable functions fit in this scenario?

Durable functions take away the problems we had in solution 1. The orchestrator function sleeps while waiting for a child function's output and wakes up automatically once the output is received. The runtime also manages the state, so we don't have to worry about function recycling or VM restarts.

What does the code look like?


public static async Task<string> Run(DurableOrchestrationContext ctx)
{
    try
    {
        /*
        Inputs - Orchestrator functions support only DurableOrchestrationContext as a
        parameter type. Deserialization of inputs directly in the function signature is
        not supported. Code must use the GetInput<T> method to fetch orchestrator
        function inputs. These inputs must be JSON-serializable types.

        Outputs - Orchestration triggers support output values as well as inputs. The
        return value of the function is used to assign the output value and must be
        JSON-serializable. If a function returns Task or void, a null value will be
        saved as the output.
        */
        var someInput = ctx.GetInput<string>(); // string used here for illustration

        var outputF1 = await ctx.CallActivityAsync<string>("Function1", someInput);
        var outputF2 = await ctx.CallActivityAsync<string>("Function2", outputF1);
        var outputF3 = await ctx.CallActivityAsync<string>("Function3", outputF2);
        var outputF4 = await ctx.CallActivityAsync<string>("Function4", outputF3);

        // Log output
        return outputF4;
    }
    catch (Exception)
    {
        // Error handling/compensation goes here. Rethrow if you can't
        // compensate, so the orchestration is marked as failed.
        throw;
    }
}

How does it do it? Magic?

Not really. 🙂 It uses a cloud design pattern called Event Sourcing. I’ve got a series of blog posts about event sourcing if you’re interested.

Durable functions are built on top of the Durable Task Framework. When you await a call on the DurableOrchestrationContext, the framework writes a checkpoint to a 'history table' and exits the function. When the output is ready, it re-runs the function up to the point of the await (the checkpoint) and injects the value in. Underneath it uses queues to trigger the awakening, but that's all abstracted away from you. If you're really interested, you can have a peek inside the linked storage account at the runtime-generated tables and queues.
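
Conceptually, you can think of the replay like the sketch below. This is my own simplification, not the actual framework code:

// A conceptual sketch of replay, NOT the real Durable Task Framework code.
// On each replay the orchestrator re-executes from the top; any await whose
// result is already in the history completes immediately with the saved value.
public class ReplayingContextSketch
{
    // Outputs already recorded in the history table, in completion order.
    private readonly Queue<object> _recordedResults = new Queue<object>();

    public Task<T> CallActivityAsync<T>(string name, object input)
    {
        if (_recordedResults.Count > 0)
        {
            // Replay: this activity already ran, so return the saved output.
            return Task.FromResult((T)_recordedResults.Dequeue());
        }

        // First time through: enqueue a message to trigger the activity,
        // checkpoint to the history table, and unload the orchestrator.
        // When the activity completes, the orchestrator replays from the top.
        throw new NotImplementedException("Checkpoint and unload happen here.");
    }
}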

You can read more about the technology powering durable functions here.

Any Constraints?

Yes. Because the orchestrator function runs multiple times (the replay/checkpoint mechanism mentioned above), it is important that the orchestrator function is deterministic. To put it simply, the code must return the same value for the same input (see the sketch after the list below).

  • This means you can't generate random numbers or GUIDs.
  • If you're calling remote endpoints, those calls will have to be idempotent.
  • No async/await unless you're awaiting on the DurableOrchestrationContext.
  • Non-blocking code only (no I/O or Thread.Sleep).
  • The current date/time should be accessed through CurrentUtcDateTime.
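
Here is a minimal sketch (my own illustration, not from the docs) contrasting non-deterministic code with replay-safe equivalents:

// Contrasts non-deterministic code with replay-safe equivalents.
// "CreateId" is a hypothetical activity name used for illustration.
public static async Task<string> RunOrchestrator(DurableOrchestrationContext ctx)
{
    // BAD: DateTime.UtcNow yields a different value on every replay.
    // var now = DateTime.UtcNow;

    // GOOD: CurrentUtcDateTime is replay-safe; the runtime records it in the
    // history and returns the same value each time this line is replayed.
    var now = ctx.CurrentUtcDateTime;

    // BAD: Guid.NewGuid() breaks determinism inside the orchestrator.
    // GOOD: push the non-deterministic work into an activity function;
    // its result is checkpointed and served from the history on replay.
    var id = await ctx.CallActivityAsync<Guid>("CreateId", now);

    return $"{id} created at {now:O}";
}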

Full list of constraints can be found here.

Closing Remarks

With durable functions, Microsoft has given us the ability to easily orchestrate our serverless functions. This will hopefully encourage more people to use Azure Functions to build their compute logic. I highly recommend having a look at the Microsoft documentation and the other use cases (i.e. fan-out/fan-in and async HTTP) for durable functions.

If you have any thoughts or comments please leave them here as they help me improve.

Using Azure Functions HttpTrigger As Web API

If you haven't lived under a rock for the last 18 months, you'll know 'serverless' is the new cool kid in town. Microsoft's offering is called Azure Functions, while Amazon's is AWS Lambda.

I won't go into the various pros and cons of a serverless application in this blog post; plenty has been written about the nuances of serverless. I recommend having a read of Martin Fowler's post here.

Some Context

I recently worked on a project at Readify: a POC (proof of concept) mobile app built by a team of four. We used Ionic for the app, and Azure Functions was chosen to implement the back-end web API. We didn't have a lot of surface area to expose through the API, so spinning up an App Service with a hosted ASP.NET WebApi project wasn't deemed necessary. We decided to have the mobile app talk directly to the Azure Function API endpoints. We deliberated on using API Management but ultimately decided against it since this was a POC project with limited scope.

Reasons For Our Technology Choices

  • Ionic (I should do another blog post about the 'fun' we had setting up CI/CD using Cordova/Ionic + MacInCloud + HockeyApp)
    We had 10 weeks to develop the app and the client wanted it to be cross-platform. We could have gone with a progressive web app, but push notifications on iOS wouldn't have been possible.
  • Serverless
    We had a lot of back-end integration pieces that were triggered by certain events. This was a natural fit for the consumption model a serverless function provides. Azure Functions was chosen because of the team's experience with it.
  • HTTP-Triggered Azure Function as Web API
    This was perhaps the most contentious choice. We could easily have gone with a full ASP.NET WebApi solution, but since this was a POC app we wanted to see if we could utilize the same consumption model for the API as well. We decided to keep the core part of the project in a class library (we used a command/query dispatcher pattern), so pivoting to an ASP.NET WebApi project wouldn't be hard if required further down the line.
  • API Management
    We initially provisioned API Management in Azure but quickly found that it can't be emulated locally, so things like CORS (Cross-Origin Resource Sharing) and authentication would be impossible to test when running the app against Azure Functions locally. This threw a big wrench in our plans. API Management wasn't cheap either, and it didn't fit the consumption-based costing model we were going with. So we decided to drop it and implement CORS and authentication ourselves.

How Did We Implement The API?

An Azure Function needs to be very lightweight, so we were careful not to introduce unnecessary complexity.

  • Most requests coming through had a JWT bearer token, so we needed a way to decode it and construct a proper claims principal. If there was anything wrong with the JWT (i.e. wrong 'audience' or 'scope') we would return a 401 response.
  • All requests needed to support CORS.

The HTTP request needed to go through a layer that handles authentication and another that handles CORS. In an ASP.NET WebApi application this is handled through the OWIN pipeline: the auth and CORS middleware inspect the request, handle it appropriately and set the HttpContext accordingly.

It’s a simple enough pattern to follow so we designed our middleware interface based on OWIN.


namespace Functions.Infrastructure
{
    public abstract class HttpMiddleware
    {
        public HttpMiddleware Next;

        protected HttpMiddleware(HttpMiddleware next)
        {
            this.Next = next;
        }

        protected HttpMiddleware()
        {

        }

        public abstract Task InvokeAsync(IHttpFunctionContext context);
    }
}

It's a simple pattern that works like Russian Matryoshka dolls. Each middleware gets a chance to create/augment the response and decide whether or not to call the next middleware; you can modify the response both before and after invoking the next middleware in the pipeline.
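
As an illustration (this one is not from the project's code base), a timing middleware built on the HttpMiddleware base class above could act on both sides of the Next call:

    // An illustrative middleware (not part of the original project) showing
    // the before/after pattern: work happens both before and after Next.
    public class TimingMiddleware : HttpMiddleware
    {
        public override async Task InvokeAsync(IHttpFunctionContext context)
        {
            var stopwatch = System.Diagnostics.Stopwatch.StartNew(); // before

            if (Next != null)
            {
                await Next.InvokeAsync(context); // hand over to the inner doll
            }

            stopwatch.Stop(); // after: the response (if any) is now populated
            context.Logger.Info($"Handled {context.Request.RequestUri} in {stopwatch.ElapsedMilliseconds} ms");
        }
    }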

We don't have an HttpContext to work with like we do in WebApi, so we rolled our own.

namespace Functions.Infrastructure
{
    public interface IHttpFunctionContext
    {
        HttpRequestMessage Request { get; }
        HttpResponseMessage Response { get; set; }
        ILogger Logger { get; }
        IUser User { get; set; }
    }
}

IUser is just a wrapper around a ClaimsPrincipal. We could have set Thread.CurrentPrincipal as well, but .NET Core doesn't seem to set it in WebAPI anyway. So why bother, right? So far so good.

For completeness' sake, here are the interface and implementation we used to create the pipeline.

namespace Functions.Infrastructure
{
    public interface IMiddlewarePipeline
    {
        void Register(HttpMiddleware middleware);
        Task<HttpResponseMessage> ExecuteAsync(IHttpFunctionContext context);
    }

    public class MiddlewarePipeline : IMiddlewarePipeline
    {
        private readonly List<HttpMiddleware> _pipeline;

        public MiddlewarePipeline(List<HttpMiddleware> pipeline)
        {
            _pipeline = new List<HttpMiddleware>();

            foreach (var httpMiddleware in pipeline)
            {
                Register(httpMiddleware);
            }
        }

        public void Register(HttpMiddleware middleware)
        {
            if (_pipeline.Any())
            {
                _pipeline[_pipeline.Count - 1].Next = middleware;
            }

            _pipeline.Add(middleware);
        }

        public async Task<HttpResponseMessage> ExecuteAsync(IHttpFunctionContext context)
        {
            try
            {
                if (_pipeline.Any())
                {
                    await _pipeline[0].InvokeAsync(context);

                    if (context.Response != null)
                    {
                        return context.Response;
                    }
                }

                throw new MiddlewarePipelineException();
            }
            catch (Exception e)
            {
                context.Logger.Error(e.Message, e);
                return context.Request.CreateErrorResponse(HttpStatusCode.InternalServerError, e);
            }
        }
    }
}

We just Register() the middleware we want and call ExecuteAsync() with the function context. Couldn't be simpler, right? 🙂
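
For illustration, hand-wiring the pipeline (without the IoC container we used, and using the middleware classes covered below) would look something like this:

// A hand-wired usage sketch using the middleware classes covered later in
// this post. The context is assumed to be built from the incoming request
// and logger, and TokenValidator is assumed to have a parameterless ctor.
var pipeline = new MiddlewarePipeline(new List<HttpMiddleware>
{
    new CorsMiddleware("GET, OPTIONS"),
    new SecurityMiddleware(true, new TokenValidator()),
    new OdometerHandler(getOdometerUsingRegoQuery)
});

HttpResponseMessage response = await pipeline.ExecuteAsync(context);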

CORS

Let's look at an example of how we would implement CORS using our middleware now. I've allowed any origin here by using *; use an appropriate value that works for you. (The CORS feature pane in your Azure Function settings might need an entry with just a * as well. The online guidance for this isn't very clear, but we found that a single entry with * worked for us.)

namespace Functions.Infrastructure.Middleware
{
    public class CorsMiddleware : HttpMiddleware
    {
        private readonly string _allowedVerbs;

        public CorsMiddleware(string allowedVerbs) : base()
        {
            _allowedVerbs = allowedVerbs;
        }

        public override async Task InvokeAsync(IHttpFunctionContext context)
        {
            var response = context.Request.GetCorsResponse(_allowedVerbs);

            if (response == null)
            {
                await this.Next.InvokeAsync(context);

                if (context.Response != null)
                {
                    context.Response = context.Response.EnrichWithCorsOrigin();
                }
            }
            else
            {
                context.Response = response;
            }
        }
    }

    public static class CorsExtensions
    {
        public static HttpResponseMessage GetCorsResponse(this HttpRequestMessage req, string allowedHttpVerbs)
        {
            if (req.Method.Method.ToUpper() == "OPTIONS")
            {
                var response = req.CreateResponse(HttpStatusCode.OK, "Hello from the other side");

                if (req.Headers.Contains("Origin"))
                {
                    response.Headers.Add("Access-Control-Allow-Credentials", "true");
                    response.Headers.Add("Access-Control-Allow-Origin", "*");
                    response.Headers.Add("Access-Control-Allow-Methods", allowedHttpVerbs.ToUpper());
                    response.Headers.Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, Your-Own-Header-Here");
                }

                return response;
            }

            return null;
        }

        public static HttpResponseMessage EnrichWithCorsOrigin(this HttpResponseMessage response)
        {
            response.Headers.Add("Access-Control-Allow-Origin", "*");
            return response;
        }
    }
}

JWT Bearer Token Authentication

We used JWT.NET to implement JWT bearer token authentication in our TokenValidator class. I won't include that code here as it is out of scope; leave a comment here or email me if you get stuck implementing it.
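
Purely as a rough illustration (using System.IdentityModel.Tokens.Jwt instead of the JWT.NET library we actually used, with placeholder issuer/audience/key values), a minimal validator might look like this:

// An illustrative ITokenValidator, NOT the project's JWT.NET implementation.
// Needs System.IdentityModel.Tokens.Jwt, Microsoft.IdentityModel.Tokens,
// System.Security.Claims and System.Text. All parameter values are placeholders.
public class TokenValidator : ITokenValidator
{
    public Task<IUser> ConstructPrincipal(string bearerToken)
    {
        var handler = new JwtSecurityTokenHandler();

        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "https://your-issuer/",           // assumption
            ValidAudience = "your-api-audience",            // assumption
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes("your-signing-key")) // assumption
        };

        // Throws a SecurityToken* exception when the audience/scope/signature
        // is wrong, which the middleware can translate into a 401 response.
        ClaimsPrincipal principal = handler.ValidateToken(bearerToken, parameters, out _);

        return Task.FromResult<IUser>(new User(principal)); // hypothetical IUser wrapper
    }
}

With a validator like that in place, here is a simple example of how you could do bearer token authentication using this middleware concept.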

namespace Functions.Infrastructure.Middleware
{
    public class SecurityMiddleware : HttpMiddleware
    {
        private readonly bool _mustBeAuthenticated;
        private readonly ITokenValidator _tokenValidator;

        public SecurityMiddleware(bool mustBeAuthenticated, ITokenValidator tokenValidator)
        {
            _mustBeAuthenticated = mustBeAuthenticated;
            _tokenValidator = tokenValidator;
        }

        public override async Task InvokeAsync(IHttpFunctionContext context)
        {
            var req = context.Request;
            var log = context.Logger;

            var header = req.Headers
                .FirstOrDefault(q =>
                    q.Key.Equals("Authorization") && q.Value.FirstOrDefault() != null);

            string bearerToken = header.Value == null ? string.Empty : header.Value.FirstOrDefault()?.Substring("Bearer ".Length).Trim();

            if (string.IsNullOrWhiteSpace(bearerToken))
            {
                log.Warning("No bearer token provided");

                if (_mustBeAuthenticated)
                {
                    context.Response = req.CreateErrorResponse(HttpStatusCode.BadRequest, "Bearer token can't be empty");
                    return;
                }

            }
            else
            {
                context.User =  await _tokenValidator.ConstructPrincipal(bearerToken);
            }

            if (Next != null)
            {
                await Next.InvokeAsync(context);
            }
        }
    }
}

I agree it is a fair bit of boilerplate, but hang in there with me. 🙂 We will get more into this topic of boilerplate in the learnings section of this post.

We've got auth and CORS working, but where is the middleware that actually handles the API call?

Here is an example from an API call that returns the current odometer reading for a car, based on a registration number in the JWT claims. Notice how we assume context.User will be populated? That's because of the authentication middleware that ran before it. Is it all starting to make sense now?

public class OdometerHandler : HttpMiddleware
    {
        private readonly IGetOdometerUsingRegoQuery _getOdometerUsingRegoQuery;

        public OdometerHandler(IGetOdometerUsingRegoQuery getOdometerUsingRegoQuery)
        {
            _getOdometerUsingRegoQuery = getOdometerUsingRegoQuery;
        }

        public override async Task InvokeAsync(IHttpFunctionContext context)
        {
            var req = context.Request;
            var log = context.Logger;

            log.Info($"Odometer request received for rego:  '{context.User.Rego}'");
            var odometerDto = await _getOdometerUsingRegoQuery.ExecuteAsync(context.User.Rego);

            if (odometerDto == null)
            {
                context.Response = req.CreateResponse(HttpStatusCode.NoContent);
            }
            else
            {
                context.Response = req.CreateResponse(HttpStatusCode.OK, odometerDto);
            }

        }
    }

This is how we wired it all up. The Ioc.Bootstrap() method doesn't do anything fancy; it just creates the required middleware instances and the function context with the request and logger attached. Just like with OWIN, the order of middleware matters. Note how we allow the OPTIONS HTTP verb, because we want it handled within the function logic by our CORS middleware. Leave a comment here if you get stuck and need assistance.

namespace Functions
{
    public static class OdometerReading
    {
        [FunctionName("OdometerReading")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", "OPTIONS")]HttpRequestMessage req,
            TraceWriter log)
        {
            using (var container = Ioc.Bootstrap(req, log))
            {
                var odometerHandler = container.Resolve<OdometerHandler>();
                var corsMiddleware = container.Resolve<CorsMiddleware>(
                    new NamedParameter("allowedVerbs", "GET, OPTIONS"));
                var securityMiddleware = container.Resolve<SecurityMiddleware>();
                var errorTranslator = container.Resolve<ErrorTranslateMiddleware>();

                IHttpFunctionContext context = container.Resolve<IHttpFunctionContext>();

                var pipeline = container.Resolve<IMiddlewarePipeline>(
                    new NamedParameter("pipeline",
                    new List<HttpMiddleware>(){
                      corsMiddleware,
                      errorTranslator,
                      securityMiddleware,
                      odometerHandler
                }));

                return await pipeline.ExecuteAsync(context);
            }
        }
    }
}

Our Learnings & Was All That Worth The Effort?

That is the million-dollar question. After all, we could have gone with ASP.NET WebApi and got most of the boilerplate out of the box. (Yes, I do mean IoC here as well.)

We wanted to challenge ourselves to see if we could do this POC app using a pay-for-consumption model. An App Service hosted WebApi doesn't fit that mould entirely, and neither does Azure API Management yet. The biggest issue with API Management, though, is that the tooling isn't there for developers to emulate it when running Azure Functions locally against the mobile/web app. For a project that has many API endpoints, there is no easy way to set up API Management (i.e. we can use Swashbuckle with ASP.NET WebApi to get a Swagger file and import it into API Management, but no such feature exists for Azure Functions HTTP triggers).

What about AWS Lambda?

AWS Lambda supports hosting ASP.NET WebApi projects as a serverless app. I haven't used this feature before, but we would have chosen that option if it were available in Azure Functions, as it solves the problem of boilerplate. It's also interesting to note that at the time we developed this app, Azure Functions didn't support .NET Core but Lambda did.


Conclusions (Confused?)

Don't be. We went with the serverless consumption model first, but ended up having to implement our own OWIN-like middleware and do auth and CORS ourselves. It didn't add any significant startup or performance cost to the app, but if the project boilerplate were any larger it could have.

Before you start implementing a web API using Azure Functions HTTP triggers, consider the following:

  • Do you need authentication and authorization?
    If you have a lot of bespoke requirements around this, I recommend going with ASP.NET WebApi, as it is more mature and has the boilerplate to handle them. Azure Functions can be secured using Azure AD, but I haven't used that feature.
  • Do you have to worry about CORS? Yes, you do.
    Microsoft has changed the way CORS rules work in the Azure Functions platform settings. Make sure the rules you set work as you intended; I can't stress this enough. Even then, you will have to do some magic (implement your own CORS logic) in the functions app to handle situations where you are running it locally against the mobile/web app.
  • Do your APIs need to scale? Do you want to worry about load balancing and availability?
    You can't beat the scalability aspects of serverless apps. You do sacrifice certain things for that luxury though (i.e. cold starts).
  • Do you need to have other developers use your API? Need rate control?
    If you do, then API Management is a must. API Management also allows you to authenticate users using Azure AD.
  • Microservices, you say?
    Yes, this is where the serverless paradigm shines. Be careful though: Azure Functions work great with their bindings/triggers, but those are geared towards cloud platforms. If you need to write to a SQL database, or worse, call a SOAP endpoint, you will have to do it the old-fashioned way. We ended up using Table Storage, Storage Queues and Blobs. Any additional integration points were implemented using Logic Apps. This is a great combination.

Final Words

For most of us, this was the first time building a mobile app paired with a serverless web API. I learned a lot about the capabilities of Azure Functions and the areas where ASP.NET WebApi outshines it. Your mileage may vary, but keep these considerations and experiences in mind the next time you are presented with the opportunity to implement a web API using Azure Functions or AWS Lambda.

Thoughts? Comments? Please let me know below.

Event Sourcing Examined Part 3 Of 3

In this 3 part series we will look at what event sourcing is and why enterprise software for many established industries use this pattern.

Index

1. Part One

  • Introduction to Event Sourcing
  • Why Use Event Sourcing?
  • Some Common Pitfalls

2. Part Two 

  • Getting Familiar With Aggregates
  • Event Sourcing Workflow
  • Commands
  • Domain Event
  • Internal Event Handler
  • Repository
  • Storage & Snapshots
  • Event Publisher

3. Part Three (This one)

  • CQRS
  • Read Side of CQRS
  • Using NEventLite

Event sourcing, as demonstrated in the previous blog posts, is a GREAT way to implement the command side of CQRS. In this post we will look at how to implement the read side of CQRS using my event sourcing "framework" NEventLite.

CQRS

Command Query Responsibility Segregation is the idea of modelling our solution with separate write and read models. The idea was championed by Greg Young in the mid-2000s and has since been adopted by many. It has its roots in CQS (Command Query Separation).

CQRS is hard to implement properly, but it has a lot of advantages, like scalability, for performance-critical applications. As most information systems have orders of magnitude more reads than writes, it makes sense to have separate models catering to the specific concerns of reading and writing.

Have a look at Martin Fowler’s explanation here.

Read Side of CQRS

In the event sourcing methodology, all we persist are domain events. If we didn't have a separate read model/view, we would have to replay all the past events just to do an aggregation or find a count. This is bad. Remember how CQRS dictates different models for write and read? Good. You can probably see where I'm going with this now. What we need is an optimised read-only model/view.

In the example of the Purchase Order, we might want to show the customer their previous purchase orders and the totals. A warehouse manager might want to see the products and pending orders. Part of DDD is identifying these needs. One approach I've used is to create a separate read model for each page/grid/widget; this allows us to reuse views more often.

These read-side event handlers subscribe to domain events and update the read models as necessary. The update logic is self-contained and isolated from other views. This means we can have as many views as we like listening to domain event broadcasts, and it gives us the ability to scale different views as required. (When I use the word view here, I mean the read model plus its event handler.)

Let’s look at the sequence of events.

At start up,

  • PurchaseOrderSummaryView subscribes to PurchaseOrderCreated, Updated events
  • ProductOrderSummaryView subscribes to PurchaseOrderItemAdded, Updated events

When a customer places an order,

  1. A domain event is created.
    • Persisted to the event store (an artefact of the command side)
    • Broadcast to the message queue
  2. An external event handler receives an event (message) it subscribed to. (I use the term internal event handler to describe the event handler inside the AggregateRoot.)
  3. The external event handler reads the message and updates the read model to reflect the changes, as sketched below.
    • Optionally: persist the last event version to the view, as we can later use this to determine the "freshness" of the view.
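
In code, steps 2 and 3 might look something like this sketch. Type names, the event's properties and the repository are illustrative, based on the Purchase Order example, not real NEventLite code:

    // An illustrative external event handler for the Purchase Order example.
    // PurchaseOrderCreated, PurchaseOrderSummary and IReadModelRepository are
    // hypothetical names used only for this sketch.
    public class PurchaseOrderSummaryView : IEventHandler<PurchaseOrderCreated>
    {
        private readonly IReadModelRepository _repository;

        public PurchaseOrderSummaryView(IReadModelRepository repository)
        {
            _repository = repository;
        }

        public async Task HandleEventAsync(PurchaseOrderCreated @event)
        {
            var summary = new PurchaseOrderSummary
            {
                OrderId = @event.AggregateId,
                Total = @event.Total,
                // Optionally persist the event version to judge the view's freshness.
                LastEventVersion = @event.TargetVersion + 1
            };

            await _repository.SaveAsync(summary);
        }
    }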

Using NEventLite

NEventLite is a lightweight framework I created to handle the mundane tasks associated with setting up an event-sourced system in .NET. Think of it as event sourcing on rails.

We are going to look at the examples provided in my GitHub repo. It's a very small project that demonstrates the use of ES with snapshots and a read model.

Look at the sample Note domain model.

    public class Note : AggregateRoot, ISnapshottable
    {
        public DateTime CreatedDate { get; private set; }

        public string Title { get; private set; }

        public string Description { get; private set; }

        public string Category { get; private set; }

        #region "Constructor and Methods"

        public Note()
        {
            //Important: Aggregate roots must have a parameterless constructor
            //to make it easier to construct from scratch.

            //The very first event in an aggregate is the creation event
            //which will be applied to an empty object created via this constructor
        }

        //The following methods are how external commands interact with our aggregate.
        //A command will result in one of these methods being executed, and the resulting events will be fired.

        public Note(Guid id, string title, string desc, string cat):this()
        {
            //Pattern: Create the event and call ApplyEvent(Event)
            ApplyEvent(new NoteCreatedEvent(id, CurrentVersion, title, desc, cat, DateTime.Now));
        }

        public void ChangeTitle(string newTitle)
        {
            //Pattern: Create the event and call ApplyEvent(Event)
            if (this.Title != newTitle)
            {
                ApplyEvent(new NoteTitleChangedEvent(Id, CurrentVersion, newTitle));
            }
        }

//... see the GitHub repo example for the full source code

As you can see, we follow the pattern of creating an event and applying it. The internal event handlers are marked with a custom attribute, and some "reflection magic" in the framework knows to call the right one when ApplyEvent is called.

        [InternalEventHandler]
        public void OnNoteCreated(NoteCreatedEvent @event)
        {
            CreatedDate = @event.CreatedTime;
            Title = @event.Title;
            Description = @event.Desc;
            Category = @event.Cat;
        }

        [InternalEventHandler]
        public void OnTitleChanged(NoteTitleChangedEvent @event)
        {
            Title = @event.Title;
        }

The Note class also implements ISnapshottable, which adds these two methods, allowing it to re-hydrate itself faster (see part 2 of the blog series).

        public NEventLite.Snapshot.Snapshot TakeSnapshot()
        {
            //This method returns a snapshot which will be used to reconstruct the state

            return new NoteSnapshot(Guid.NewGuid(),
                                    Id,
                                    CurrentVersion,
                                    CreatedDate,
                                    Title,
                                    Description,
                                    Category);
        }

        public void ApplySnapshot(NEventLite.Snapshot.Snapshot snapshot)
        {
            //Important: State changes are done here.
            //Make sure you set the CurrentVersion and LastCommittedVersions here too

            NoteSnapshot item = (NoteSnapshot)snapshot;

            Id = item.AggregateId;
            CurrentVersion = item.Version;
            LastCommittedVersion = item.Version;
            CreatedDate = item.CreatedDate;
            Title = item.Title;
            Description = item.Description;
            Category = item.Category;
        }

Once the framework applies the event, it publishes it using the IEventPublisher. In the example I'm using an in-memory subscriber and injecting it in. (This isn't how you would use it in production.)

    public class MyEventPublisher : IEventPublisher
    {
        private readonly MyEventSubscriber _subscriber;

        public MyEventPublisher(MyEventSubscriber subscriber)
        {
            LogManager.Log("EventPublisher Started...", LogSeverity.Information);
            _subscriber = subscriber;
        }

        public async Task PublishAsync(IEvent @event)
        {
            LogManager.Log(
                $"Event #{@event.TargetVersion + 1} Published: {@event.GetType().Name} @ {DateTime.Now.ToLongTimeString()}",
                LogSeverity.Information);

            await Task.Run(() =>
            {
                InvokeSubscriber(@event);

            }).ConfigureAwait(false);
        }

        private void InvokeSubscriber<T>(T @event) where T : IEvent
        {
            var o = _subscriber.GetType().GetMethodsBySig(typeof(Task), null, true, @event.GetType()).First();
            o.Invoke(_subscriber, new object[] { @event });
        }
    }

Then the in-memory subscriber listens for the events and handles them. The class implements IEventHandler for multiple event types. You can have multiple subscribers handling the same events pretty easily if you want to; just make the required code changes to allow the publisher to find them via reflection.

In a production scenario you would publish to a message broker, and the subscribers would get the events from there. The message broker approach gives you capabilities like delivery guarantees when required.
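
For example, a broker-backed publisher might look something like the sketch below. IMessageBrokerClient is a hypothetical abstraction over your broker of choice (Azure Service Bus, RabbitMQ, etc.), not part of NEventLite:

    // An illustrative production-style publisher, not part of the NEventLite
    // example. IMessageBrokerClient and its SendAsync method are hypothetical.
    public class BrokerEventPublisher : IEventPublisher
    {
        private readonly IMessageBrokerClient _broker; // hypothetical client

        public BrokerEventPublisher(IMessageBrokerClient broker)
        {
            _broker = broker;
        }

        public async Task PublishAsync(IEvent @event)
        {
            // Serialize the domain event and hand it to the broker. Subscribers
            // (the read-side event handlers) consume it from their own queues,
            // which is what gives you delivery guarantees and retry semantics.
            var payload = Newtonsoft.Json.JsonConvert.SerializeObject(@event);
            await _broker.SendAsync(@event.GetType().Name, payload);
        }
    }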

    public class MyEventSubscriber : IEventHandler<NoteCreatedEvent>,
                                     IEventHandler<NoteTitleChangedEvent>,
                                     IEventHandler<NoteDescriptionChangedEvent>,
                                     IEventHandler<NoteCategoryChangedEvent>
    {

        private readonly MyReadRepository _repository;

        public MyEventSubscriber(MyReadRepository repository)
        {
            _repository = repository;
        }

        public async Task HandleEventAsync(NoteCreatedEvent @event)
        {
            LogEvent(@event);

            _repository.AddNote(new NoteReadModel(@event.AggregateId, @event.CreatedTime, @event.Title, @event.Desc, @event.Cat));
        }

        public async Task HandleEventAsync(NoteTitleChangedEvent @event)
        {
            LogEvent(@event);

            var note = _repository.GetNote(@event.AggregateId);
            note.CurrentVersion = @event.TargetVersion + 1;
            note.Title = @event.Title;

            _repository.SaveNote(note);
        }

//See github repo for full source code

There is a very simple read model in the example.


    public class NoteReadModel
    {

        public Guid Id { get; private set; }
        public int CurrentVersion { get; set; }
        public DateTime CreatedDate { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public string Category { get; set; }

        public NoteReadModel(Guid id, DateTime createdDate, string title, string description, string category)
        {
            Id = id;
            CreatedDate = createdDate;
            Title = title;
            Description = description;
            Category = category;
        }
    }

Finally the read model has its own repository to allow consuming pages/widgets to access it. This is optional.

public class MyReadRepository
    {
        private readonly MyInMemoryReadModelStorage _storage;

        public MyReadRepository(MyInMemoryReadModelStorage storage)
        {
            _storage = storage;
        }

        public void AddNote(NoteReadModel note)
        {
            _storage.AddOrUpdate(note);
        }

        public void SaveNote(NoteReadModel note)
        {
            _storage.AddOrUpdate(note);
        }

        //See the GitHub repo for the full source code
    }

The read models are persisted to a file in this example, but you can use any form of persistence that best suits your needs. I recommend a NoSQL document store, as most read models end up being queried by their ID. You can use a relational model if you intend to have more querying ability. Do as you see fit.


    public class MyInMemoryReadModelStorage
    {
        private readonly string _memoryDumpFile;
        private readonly List<NoteReadModel> _allNotes = new List<NoteReadModel>();

        public MyInMemoryReadModelStorage(string memoryDumpFile)
        {
            _memoryDumpFile = memoryDumpFile;

            if (File.Exists(_memoryDumpFile))
            {
                _allNotes = SerializerHelper.LoadListFromFile<NoteReadModel>(_memoryDumpFile);
            }
        }

        public void AddOrUpdate(NoteReadModel note)
        {
            if (_allNotes.Any(o => o.Id == note.Id))
            {
                _allNotes.Remove(_allNotes.Single((o => o.Id == note.Id)));
                _allNotes.Add(note);
            }
            else
            {
                _allNotes.Add(note);
            }

            SerializerHelper.SaveListToFile<NoteReadModel>(_memoryDumpFile, _allNotes);
        }

        public NoteReadModel GetByID(Guid Id)
        {
            return _allNotes.FirstOrDefault(o => o.Id == Id);
        }

        public IEnumerable<NoteReadModel> GetAll()
        {
            return _allNotes.ToList();
        }
    }

Run the example (Full source code is at https://github.com/dasiths/NEventLite/tree/master/src/Examples/NEventLite%20Example)

You should see output confirming that the read model and the one constructed by replaying events contain the same information.

Wrapping Up

In this series we looked at Event Sourcing with Command Query Responsibility Segregation in depth, and walked through an example of how to implement it using NEventLite. For further reading and learning, I suggest following Greg Young's blog and the CQRS Google group.

Thanks for reading and please leave your comments and feedback below.

REST + AngularJS Basics With .Net Core WebAPI CRUD Example

Yes, I have been MIA for a while. I was on holiday for an extended period of time and started working for a software consultancy a few weeks ago, so my busy schedule hasn't given me an opportunity to blog.

Why Front End, Dasith?

I'm getting my hands dirty with AngularJS for a brownfield project and wanted to write some material to help other team members who might not have been exposed to Angular before, so I took the opportunity to write it up as a blog post. I know front-end tech is not my focus, but as a full-stack developer it is important to know at least one major JavaScript MV* framework. It might even come in handy when you need to whip up a quick prototype to demonstrate a back-end feature to a customer. The bottom line is that we back-end developers can't ignore front-end technologies.

I'm using Angular 1.6 for this because it was a brownfield project and the existing code base was on 1.x. If you are starting a new project, I highly recommend you familiarize yourself with TypeScript and Angular 2. (Strongly typed master race is a thing, right?)


So let's jump right in. It's a prerequisite of this post to have an understanding of the building blocks of an Angular app: you should have a basic understanding of modules, controllers, services, data binding, filtering and directives. A good start is the Angular docs at https://docs.angularjs.org/guide.

I have uploaded the code covered in this article to a GitHub repo at https://github.com/dasiths/AngularJsWebApiTest and will be doing any feature additions and bug fixes there. Consider this post a guide only.

Module.js

// Define the `booksApp` module
var app = angular.module('booksApp', []);

This is where we define the name of the module and assign it to a global variable for later use. Nothing fancy here.

Service.js

app.service('crudService', function ($http) {

    var baseUrl = 'http://localhost:50829';

    //Create new record
    this.post = function (Book) {
        var request = $http({
            method: "post",
            url: baseUrl + "/api/books",
            data: Book
        });
        return request;
    }

    //Get single record
    this.get = function (Id) {
        return $http.get(baseUrl + "/api/books/" + Id);
    }

    //Get all books
    this.getAllBooks = function () {
        return $http.get(baseUrl + "/api/books");
    }

    //Update the record
    this.put = function (Id, Book) {
        var request = $http({
            method: "put",
            url: baseUrl + "/api/books/" + Id,
            data: Book
        });
        return request;
    }

    //Delete the record
    this.delete = function (Id) {
        var request = $http({
            method: "delete",
            url: baseUrl + "/api/books/" + Id
        });
        return request;
    }
});

We define the CRUD functions here; each of them returns a promise from Angular's $http service. Each function handles a CRUD operation and points at an endpoint. By putting these functions inside the service we separate those concerns from the controller, and we can later inject the service into the controller. Have a read here: https://www.sitepoint.com/tidy-angular-controllers-factories-services/.

Controller.js

app.controller('BookListController', function BookListController($scope, $http, $log, crudService) {
    $scope.statusClass = 'label-info';
    $scope.status = 'Loading books...';
    $scope.IsNewRecord = 1; //The flag for a new record

    loadAllBooks();

    function loadAllBooks() {

        var promise = crudService.getAllBooks();

        promise.then(
            function (response) {
                $scope.Books = response.data;
                $scope.status = 'Loaded. Code: ' + response.status;
                $scope.statusClass = 'label-success';
            },
            function (error) {
                $scope.Books = null;
                $scope.status = 'Error: ' + error;
                $scope.statusClass = 'label-warning';
                $log.error('failure loading Books', error);
            });
    }

    //Method to save and add
    $scope.save = function () {
        var Book = {
            Id: $scope.BookId,
            Name: $scope.BookName,
            Author: $scope.BookAuthor
        };

        //If the flag is 1 then it is a new record
        if ($scope.IsNewRecord === 1) {
            var promisePost = crudService.post(Book);
            promisePost.then(function (result) {
                $scope.BookId = result.data;
                loadAllBooks();
            }, function (error) {
                $scope.status = 'Error: ' + error;
                console.log("Err" + error);
            });
        } else { //Else edit the record
            var promisePut = crudService.put($scope.BookId, Book);
            promisePut.then(function (result) {
                $scope.status = "Updated Successfully";
                $scope.statusClass = 'label-success';
                loadAllBooks();
            }, function (error) {
                $scope.status = 'Error: ' + error;
                $scope.statusClass = 'label-warning';
                console.log("Err" + error);
            });
        }
    }

    //Method to delete
    $scope.delete = function () {
        var promiseDelete = crudService.delete($scope.BookId);
        promiseDelete.then(function (result) {
            $scope.status = "Deleted Successfully";
            $scope.statusClass = 'label-success';
            $scope.BookId = 0;
            $scope.BookName = "";
            $scope.BookAuthor = "";

            loadAllBooks();
        }, function (error) {
            $scope.status = 'Error: ' + error;
            $scope.statusClass = 'label-warning';
            console.log("Err" + error);
        });
    }

    //Method to get a single book
    $scope.get = function (Book) {
        var promiseGetSingle = crudService.get(Book.id);

        promiseGetSingle.then(function (result) {
            var res = result.data;
            $scope.BookId = res.id;
            $scope.BookName = res.name;
            $scope.BookAuthor = res.author;

            $scope.IsNewRecord = 0;
        },
        function (errorPl) {
            console.log('failure loading Book', errorPl);
        });
    }

    //Clear the scope models
    $scope.clear = function () {
        $scope.statusClass = 'label-info';
        $scope.status = "";
        $scope.IsNewRecord = 1;
        $scope.BookId = 0;
        $scope.BookName = "";
        $scope.BookAuthor = "";
    }

}).directive('myBooktableheader', function () {
    return {
        templateUrl: 'http://localhost:51836/BookTemplate.html'
    };
});

Here we have the controller that coordinates everything. As discussed earlier, notice how crudService is injected in. Dependency injection is a major part of how Angular works; read more here: https://docs.angularjs.org/guide/di.

Everything here is pretty straightforward. We just created methods to load and save books, and we have some fields in the scope that we bind the input fields to ($scope.BookId, BookName, BookAuthor). Read more here: https://www.w3schools.com/angular/angular_databinding.asp.

To demonstrate directives, I have also added a custom directive definition right at the end of the controller which has an external template.

BookTemplate.html

<th>Id</th>
<th>Name</th>
<th>Author</th>
<th></th>

Directives are a big topic unto themselves; we are covering only very basic use here with a trivial example. Read more about directives here: https://docs.angularjs.org/guide/directive.

Book.cshtml (partial code example)

See full code at https://github.com/dasiths/AngularJsWebApiTest/blob/master/AngularWebApplication/Views/Book/Index.cshtml.

<body ng-controller="BookListController">
    <div class="body-content">
        <h3>@ViewData["Message"]</h3>
        <table>
            <tr>
                <td>
                    <table id="searchObjResults" class="table-striped table-hover" width="600">
                        <tr my-booktableheader></tr>
                        <tr ng-repeat="book in Books | filter:search:strict">
                            <td>{{book.id}}</td>
                            <td>{{book.name}}</td>
                            <td>{{book.author}}</td>
                            <td>
                                <button type="button" class="btn" ng-click="get(book)">
                                    <span class="glyphicon glyphicon-edit"></span>
                                </button>
                            </td>
                        </tr>
                        ... see the GitHub repo for the full code

 <label ng-class="statusClass">Status: {{status}}</label>

Use this area to filter results.

 <label>Any: <input ng-model="search.$"></label>

 <label>Name only <input ng-model="search.name"></label>

 <label>Author only <input ng-model="search.author"></label>

 <label>Equality <input type="checkbox" ng-model="strict"></label>

This will be our template for the main web page. Take note of how we use both one-way and two-way data binding directives. More here: https://www.w3schools.com/angular/angular_databinding.asp.

The other thing to note here is the filtering; the example covers some typical uses of filters. Read more about filters here: https://www.w3schools.com/angular/angular_filters.asp.

Well, that's it for the client side of the application. Now let's quickly look at the WebAPI end point.

I'm using Visual Studio 2017 to create my project, using the ASP.NET Core Web API template.

BookRepository.cs

public class BookRepository
{
    private List<Book> _books = new List<Book>();

    public BookRepository()
    {
        _books.Add(new Book() { Id = 1, Name = "P=NP in .NET", Author = "Jon Skeet" });
        _books.Add(new Book() { Id = 2, Name = "Dank Memes", Author = "Dasith Wijes" });
        _books.Add(new Book() { Id = 3, Name = "50 'Shards' of Azure SQL", Author = "Actor Model" });
        _books.Add(new Book() { Id = 4, Name = "Jailbreaking iOS to Set a Default Web Browser", Author = "Cult Of Fandroid" });
        _books.Add(new Book() { Id = 5, Name = "OCD Olympics", Author = "Also Me" });
    }

    public IEnumerable<Book> GetAllBooks()
    {
        return _books.ToArray();
    }

    public void UpdateBook(Book b)
    {
        _books[_books.IndexOf(_books.Single(o => o.Id == b.Id))] = b;
    }

    public int AddBook(Book b)
    {
        b.Id = _books.Max(o => o.Id) + 1;
        _books.Add(b);

        return b.Id;
    }

    public void DeleteBook(int id)
    {
        _books.Remove(_books.Single(o => o.Id == id));
    }
}
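
(The Book model itself isn't shown in the snippet above; it's presumably just a simple POCO along these lines.)

// Assumed shape of the Book model used by the repository and the controller.
public class Book
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Author { get; set; }
}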

Startup.cs

I enabled CORS in Startup.cs as follows, because our Angular app will be communicating from a different "host". (The Angular app runs as a separate application on a separate port to the WebAPI.) I also added the repository as a singleton to the IoC container.

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddCors();
        services.AddMvc();

        // Add our repository as a singleton
        services.AddSingleton<Repository.BookRepository>();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        app.UseCors(builder => builder.WithOrigins("*")
            .AllowAnyOrigin()
            .AllowAnyMethod()
            .AllowAnyHeader());

        app.UseMvc();
    }
}

BooksController.cs

This is your stock-standard Web API controller. I've put some effort into wrapping my results in the proper HTTP status codes and handling some errors. This is by no means perfect, but note how the Web API communicates back via both the status code and the content.

[Route("api/[controller]")]
public class BooksController : Controller
{
    private readonly BookRepository _repo;

    public BooksController(BookRepository repo)
    {
        _repo = repo;
    }

    // GET api/books
    [HttpGet]
    public IActionResult Get()
    {
        var result = _repo.GetAllBooks();

        if (result.Count() > 0)
            return Ok(result);
        else
            return NoContent();
    }

    // GET api/books/id
    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        var result = _repo.GetAllBooks().SingleOrDefault(o => o.Id == id);

        if (result == null)
            return NoContent();
        else
            return Ok(result);
    }

    // GET api/books/q=query
    [HttpGet("q={name}")]
    public IActionResult Get(string name)
    {
        var results = _repo.GetAllBooks().Where(
            o => o.Name.IndexOf(name, 0, StringComparison.OrdinalIgnoreCase) >= 0);

        if (results.Count() > 0)
        {
            return Ok(results);
        }
        else
        {
            return NoContent();
        }
    }

    // POST api/books
    [HttpPost]
    public IActionResult Post([FromBody] Book b)
    {
        if (ModelState.IsValid == false)
            return BadRequest();

        try
        {
            return Ok(_repo.AddBook(b));
        }
        catch (Exception ex)
        {
            return BadRequest(ex.Message);
        }
    }

    // PUT api/books/id
    [HttpPut("{id}")]
    public IActionResult Put(int id, [FromBody] Book b)
    {
        if (ModelState.IsValid == false || id != b.Id)
            return BadRequest();

        var result = _repo.GetAllBooks().SingleOrDefault(o => o.Id == id);
        if (result == null)
            return NotFound();

        try
        {
            _repo.UpdateBook(b);
            return Ok(id);
        }
        catch (Exception ex)
        {
            return BadRequest(ex.Message);
        }
    }

    // DELETE api/books/id
    [HttpDelete("{id}")]
    public IActionResult Delete(int id)
    {
        try
        {
            _repo.DeleteBook(id);
            return Ok(id);
        }
        catch (Exception ex)
        {
            return BadRequest(ex.Message);
        }
    }
}

That's it. We have covered the major components of the project. Run it and have a play with it.

Good luck with your learning. I recommend learning about a topic like token authentication using Angular + WebAPI next, because it is something you will end up doing often. Another important area we didn't cover here is routing.

Please post your comments and ideas here as they help me improve my knowledge and blogging skills. Thank you.