Building a Logging Microservice

I’m starting to build an application using a microservice architecture and thought it would be worthwhile to document my journey. This first post is about building a logging microservice that will be used by all of the other services in the final solution.

Requirements

The requirements are actually quite simple.

As a microservice
I want to log a message
So that I can trace what is happening within my application

As a microservice
I want to log an exception
So that I can debug what went wrong within my application

That is really it. I’m going to use a 3rd-party logging service that provides a nice UI for visualizing my logs, so I don’t have any requirements to fetch data. This makes for a nice and easy start to get used to building microservices.

Resources

I’m going to use a REST-based API. The two resources that can be extracted from the requirements are a Message and an Exception. Their properties are as follows:

Message

  • Level - An enumeration that can be ‘Debug’, ‘Info’, ‘Warning’, or ‘Error’
  • Content - The content of the message
  • Source - The origin of the message
  • Environment - The application’s environment (dev/test/production/etc)
  • Region - The location of the application (I’m deploying in various Azure regions)
  • Tags - A string array of anything you’d like to tag the message with
  • Metadata - Key/value pairs that you’d like to add to the message
  • Timestamp - When the message was created

Exception

  • Message - The exception’s message
  • Source - The origin of the exception
  • Type - The type of the exception
  • StackTrace - The exception’s stack trace
  • Environment - The application’s environment (dev/test/production/etc)
  • Region - The location of the application (I’m deploying in various Azure regions)
  • Tags - A string array of anything you’d like to tag the exception with
  • Metadata - Key/value pairs that you’d like to add to the exception
  • Timestamp - When the exception was created

URIs

Now that we have our resources defined we can infer the URIs that we will use. In this particular case I’m going to implement versioning by embedding it in the URL of the resource locations, which gives me the following URIs:

  • POST {base url}/v1/messages to create a message
  • POST {base url}/v1/exceptions to create an exception

Status Codes

This microservice will be split into two parts. The API is only responsible for validating the input and placing the request onto a message queue. Because there is no temporal coupling between the API and the queue processing, we will simply return a 202 Accepted status code if the request is valid. Otherwise we’ll return a 400 Bad Request status along with the details of what’s wrong with the request.

Message Format

I’m going to use JSON as the message format from the client as well as for the message going into the message queue. This dictates that clients need to send the Content-Type: application/json header with each request, and the API will need to use a JSON formatter when it attempts to do its data binding.
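
To make the contract concrete, here is a minimal sketch of what a client call might look like. The host name and every payload value are placeholders for illustration; only the route, the header, and the field names come from the definitions above.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class LoggingClientSketch
{
    // a minimal client sketch; the host name and payload values are placeholders
    public static async Task SendSampleMessageAsync()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("https://logging.example.com/") })
        {
            // a sample payload matching the Message resource defined above
            var json = JsonConvert.SerializeObject(new
            {
                level = "Info",
                content = "Order 42 was processed",
                source = "orders-service",
                environment = "dev",
                region = "East US",
                tags = new[] { "orders" },
                metadata = new { correlationId = "abc123" },
                timestamp = "2017-04-01T12:00:00Z"
            });

            // clients must send Content-Type: application/json
            var content = new StringContent(json, Encoding.UTF8, "application/json");

            // POST {base url}/v1/messages - expect 202 Accepted if valid, 400 Bad Request otherwise
            var response = await client.PostAsync("v1/messages", content);
            Console.WriteLine((int)response.StatusCode);
        }
    }
}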

Versioning

As previously mentioned, we’re going to version based on the URL, and on the API side we’ll match the version in our route constraints. If/when we need to add new versions we’ll just have to add new action methods on our API controller with the updated route constraints.
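
To sketch how that might look in practice, here is a hypothetical future shape of the controller, not code from the actual service; the v2 action and its route are made up purely for illustration.

using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

namespace Api.Controllers
{
    // hypothetical sketch: versioned routes living side by side on the same controller
    public class VersionedLoggingSketchController : ControllerBase
    {
        [HttpPost("/v1/messages")]
        public IActionResult QueueMessage([FromBody] JObject message)
        {
            // existing v1 behaviour: validate against the v1 schema and queue (shown later in this post)
            return Accepted();
        }

        [HttpPost("/v2/messages")]
        public IActionResult QueueMessageV2([FromBody] JObject message)
        {
            // hypothetical v2 behaviour: validate against an updated schema and queue
            return Accepted();
        }
    }
}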

Design Approach

There are two parts to the service, the API and the Background Job. The responsibility of the API is simply to validate that the request is good based on our API definition and to put the request on a message queue. If all goes well we’ll return a 202 Accepted status to the caller. The second part, the Background Job, will pull messages off of the message queue and send them to our 3rd-party logging service for long-term storage.

Api

We’re going to build the API as a .NET Core Web API, keeping it as minimalistic as is pragmatic.

Prerequisites

There are a couple of things you need to do in order to get started that I don’t want to cover in detail.

  1. You need to set up a Service Bus resource on Azure and have your connection string readily accessible. There is very good documentation on Microsoft’s site that you can search for if you don’t know how to do this. In addition to the service bus itself, you need to create two queues named message-log and exception-log.
  2. Hopefully by the time you read this there will be a proper NuGet package for Microsoft.Azure.ServiceBus, but in case it’s not there yet you will need to build and pack your own NuGet package from the source at https://github.com/Azure/azure-service-bus-dotnet.

Getting Started

From Visual Studio 2017, create a new empty ASP.NET Core 1.1 project.

Next you want to add the NuGet packages that the project needs. Make special note that we are using Microsoft.AspNetCore.Mvc.Core and not the full MVC package. This is important because we don’t need all of the UI bits, like the Razor view engine, for our microservice. Along with Microsoft.AspNetCore.Mvc.Core you will also need the JSON formatter package (Microsoft.AspNetCore.Mvc.Formatters.Json), Newtonsoft.Json.Schema for request validation, and Microsoft.Azure.ServiceBus. Also note that the Microsoft.Azure.ServiceBus package is the one you have to build yourself (see step #2 of the prerequisites) unless it’s now available officially on NuGet.

Program.cs

There isn’t much that we have to do in Program.cs other than enable IIS integration so that we can host this API behind IIS proper when we deploy to production (or use IIS Express while we are developing).

using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace Api.Host
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

Startup.cs

Things start to take shape in Startup.cs. Here we set up our configuration, configure our service to use the core components of ASP.NET Core MVC, and specify that we want to use JSON as our format. Additionally we create a single instance of our message queue (which we will show later) and register it in the IoC container.

using System.IO;
using Api.Infrastructure;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace Api.Host
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var config = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile(@".\Host\appsettings.json");

            Configuration = config.Build();
        }
        public IConfiguration Configuration { get; set; }

        public void ConfigureServices(IServiceCollection services)
        {
            services
               .AddMvcCore()
               .AddJsonFormatters();

            // create the AzureMessageQueue instance
            var messageQueue = new AzureMessageQueue(connectionString: Configuration["ServiceBusConnectionString"]);

            // register the message queue instance as a singleton in the IoC container
            services.AddSingleton<IMessageQueue>(messageQueue);
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            app.UseMvc();
        }
    }
}

appsettings.json

The appsettings.json file is our configuration file where we hold the connection string to the Service Bus that we are using (created in step #1 of the prerequisites).

{
  "ServiceBusConnectionString": "<your service bus connection string>"
}

IMessageQueue.cs

We are going to define an interface named IMessageQueue that represents a service that can queue a message. For this implementation we are going to use Azure Service Bus queues, but you can use any queue service you like. As you can see, we are very explicit about logging a message vs. an exception. I’m doing this because I want the different types to go to different queues so that I can process them separately in the background. If we used only a single queue then there would have to be more logic in the background job to inspect the queued message and determine whether it was an exception or a log message (because the 3rd-party service API that we use requires them to be treated differently). Also, by using two queues we ensure that our (hopefully) infrequent exceptions get logged as quickly as possible, so that we’ll get alerted sooner rather than having them intermingled with thousands of log messages.

using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

namespace Api.Infrastructure
{
    public interface IMessageQueue
    {
        Task QueueMessageAsync(JObject message);
        Task QueueExceptionAsync(JObject exception);
    }
}

AzureMessageQueue.cs

Our implementation of IMessageQueue for Azure is as follows. This is the largest class in the application, as it deals with setting up the Azure queue clients and preparing brokered messages. We maintain a queue client per queue, so here we have one for the messages and one for the exceptions.


using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

namespace Api.Infrastructure
{
    internal sealed class AzureMessageQueue : IMessageQueue
    {
        private static QueueClient _messageQueueClient;
        private static QueueClient _exceptionQueueClient;

        private const string MessageQueueName = "message-log";
        private const string ExceptionQueueName = "exception-log";

        public AzureMessageQueue(string connectionString)
        {
            // make sure we get a valid set of parameters
            if (string.IsNullOrWhiteSpace(connectionString))
            {
                throw new Exception($"{nameof(connectionString)} cannot be blank.");
            }

            // create the connection string along with our entity path (queue name)
            var messageQueue = new ServiceBusConnectionStringBuilder(connectionString)
            {
                EntityPath = MessageQueueName
            };

            var exceptionQueue = new ServiceBusConnectionStringBuilder(connectionString)
            {
                EntityPath = ExceptionQueueName
            };

            // create the queue client
            _messageQueueClient = QueueClient.CreateFromConnectionString(messageQueue.ToString());
            _exceptionQueueClient = QueueClient.CreateFromConnectionString(exceptionQueue.ToString());
        }
        public async Task QueueMessageAsync(JObject message)
        {
            // create a brokered message to put in the queue
            var brokeredMessage = CreateBrokeredMessage(message);

            // send the message to the queue
            await _messageQueueClient.SendAsync(brokeredMessage);
        }

        public async Task QueueExceptionAsync(JObject exception)
        {
            // create a brokered message to put in the queue
            var brokeredMessage = CreateBrokeredMessage(exception);

             // send the message to the queue
            await _exceptionQueueClient.SendAsync(brokeredMessage);
        }

        private static BrokeredMessage CreateBrokeredMessage(JObject data)
        {
            // serialize the data as json
            var json = JsonConvert.SerializeObject(data);

            // create a new brokered message to send to the queue
            return new BrokeredMessage(json);
        }
    }
}
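
Outside of the IoC container, using this wrapper directly looks roughly like the following sketch; the connection string and the payload values are placeholders, and the helper class is hypothetical.

using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

namespace Api.Infrastructure
{
    internal static class QueueUsageSketch
    {
        internal static async Task RunAsync()
        {
            // the connection string is a placeholder - normally this instance comes from the IoC container
            IMessageQueue queue = new AzureMessageQueue("<your service bus connection string>");

            // build an arbitrary JSON payload and push it onto the message-log queue
            var message = JObject.FromObject(new
            {
                level = "Info",
                content = "Hello from the usage sketch",
                source = "sample-app",
                environment = "dev",
                region = "East US",
                timestamp = "2017-04-01T12:00:00Z"
            });

            await queue.QueueMessageAsync(message);
        }
    }
}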

LoggingController.cs

The API controllers themselves are responsible for ensuring that we got a request from the API client, that the request is valid according to its schema, and for queueing the message. Usually I would create Models to represent the Message and Exception, but because all we are doing is validating that we have good input values there really is no need to define a model class, use FluentValidation to create validators by defining business rules, and so on. Converting the JSON request to a Model class during data binding, running FluentValidation, and re-serializing the Model class back to JSON to put in the queue is a lot of overhead. Instead we just use JSON Schema to define those same rules, which alleviates the need to define extra classes and do extra serialization/deserialization.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Api.Infrastructure;
using Api.JsonSchemas;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;
using Newtonsoft.Json.Schema;

namespace Api.Controllers
{
    public class LoggingController : ControllerBase
    {
        private readonly IMessageQueue _loggingMessageQueue;

        public LoggingController(IMessageQueue loggingMessageQueue)
        {
            _loggingMessageQueue = loggingMessageQueue ??
                throw new Exception($"{nameof(loggingMessageQueue)} cannot be null.");
        }

        [HttpGet("/")]
        public IActionResult Get()
        {
            return Ok("Logging Microservice");
        }

        [HttpPost("/v1/messages")]
        public async Task<IActionResult> QueueMessage([FromBody] JObject message)
        {
            try
            {
                // ensure we received a request
                if (null == message)
                {
                    return BadRequest("The request is null and cannot be processed.");
                }

                // validate the message
                var valid = ValidateSchema(message, Schemas.Message, out string errors);

                if (!valid)
                {
                    return BadRequest(errors);
                }

                // queue the message
                await _loggingMessageQueue.QueueMessageAsync(message);

                // return a 202 Accepted response b/c we don't know when the message will be logged
                return Accepted();
            }
            catch (Exception e)
            {
                return StatusCode(500, e.Message);
            }
        }

        [HttpPost("/v1/exceptions")]
        public async Task<IActionResult> QueueException([FromBody] JObject exception)
        {
            try
            {
                // ensure we received a request
                if (null == exception)
                {
                    return BadRequest("The request is null and cannot be processed.");
                }

                // validate the exception
                var valid = ValidateSchema(exception, Schemas.Exception, out string errors);

                if (!valid)
                {
                    return BadRequest(errors);
                }

                // queue the exception
                await _loggingMessageQueue.QueueExceptionAsync(exception);

                // return a 202 Accepted response b/c we don't know when the exception will be logged
                return Accepted();
            }
            catch (Exception e)
            {
                return StatusCode(500, e.Message);
            }
        }

        private static bool ValidateSchema(JToken data, string schema, out string errors)
        {
            // initialize out param
            errors = string.Empty;

            // parse the schema
            var parsedSchema = JSchema.Parse(schema);

            // validate the JSON data
            IList<string> messages;
            var valid = data.IsValid(parsedSchema, out messages);

            // if the json is invalid lets capture the errors
            if (!valid)
            {
                errors = string.Join(Environment.NewLine, messages);
            }

            return valid;
        }
    }
}

Schemas.cs

The schemas that are used to validate the requests are as follows. They are simple to create using LINQPad and the JSchemaGenerator from Newtonsoft.Json.Schema, roughly as sketched below.
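
This is a rough sketch of that generation step, not code from the service itself; the SampleMessage POCO exists only to feed the generator, and the output is then hand-tuned into the schema strings that follow.

using System;
using Newtonsoft.Json.Schema.Generation;

// a hypothetical POCO used only to generate a starting-point schema
public class SampleMessage
{
    public string Level { get; set; }
    public string Content { get; set; }
    public string Source { get; set; }
    public string Environment { get; set; }
    public string Region { get; set; }
    public string[] Tags { get; set; }
    public string Timestamp { get; set; }
}

public static class SchemaGenerationSketch
{
    public static void Run()
    {
        // generate a JSON schema from the type, then dump it so it can be
        // pasted into Schemas.cs and tweaked (enums, required fields, metadata, etc.)
        var generator = new JSchemaGenerator();
        var schema = generator.Generate(typeof(SampleMessage));
        Console.WriteLine(schema.ToString());
    }
}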

namespace Api.JsonSchemas
{
    internal static class Schemas
    {
        internal static string Message = @"
        {
          'id': 'Message',
          'type': 'object',
          'properties': {
            'level': {
              'type': 'string',
              'enum': [
                'Debug',
                'Info',
                'Warning',
                'Error'
              ]
            },
            'content': {
              'type': 'string',
              'maxLength': 10000
            },
            'source': {
              'type': 'string'
            },
            'environment': {
              'type': 'string'
            },
            'region': {
              'type': 'string'
            },
            'tags': {
              'id': 'String[]',
              'type': [
                'array',
                'null'
              ],
              'items': {
                'type': [
                  'string',
                  'null'
                ]
              }
            },
            'timestamp': {
              'type': 'string'
            },
            'metadata': {
              'type': [
                'object',
                'null'
              ],
              'additionalProperties': {
                'type': [
                  'string',
                  'null'
                ]
              }
            }
          },
          'required': [
            'level',
            'content',
            'source',
            'environment',
            'region',
            'timestamp'
          ]
        }";

        internal static string Exception = @"
        {
          'id': 'Exception',
          'type': 'object',
          'properties': {
            'message': {
              'type': 'string'
            },
            'source': {
              'type': 'string'
            },
            'type': {
              'type': 'string'
            },
            'stackTrace': {
              'type': 'string'
            },
            'environment': {
              'type': 'string'
            },
            'region': {
              'type': 'string'
            },
            'tags': {
              'id': 'String[]',
              'type': [
                'array',
                'null'
              ],
              'items': {
                'type': [
                  'string',
                  'null'
                ]
              }
            },
            'timestamp': {
              'type': 'string'
            },
            'metadata': {
              'type': [
                'object',
                'null'
              ],
              'additionalProperties': {
                'type': [
                  'string',
                  'null'
                ]
              }
            }
          },
          'required': [
            'message',
            'source',
            'type',
            'stackTrace',
            'environment',
            'region',
            'timestamp'
          ]
        }";
    }
}

Next

In part two we will build the Background Job application that processes messages from the message queue.