Clydesdale Software Blog



  • Looking at Umbraco 7

    After much experience and frustration with previous versions of Umbraco, I decided to take another look at Umbraco version 7, which is based on ASP.NET MVC.

     

    The first thing I like about v7 is the ease of setup: just create an empty web project, install the Umbraco NuGet package (which now has the MVC flag set), then run the project and step through the wizard.

     

    At this point you have a functional CMS website. Being a person who likes to customize things, I wanted to know how easy it was to insert my own code, i.e. my own controllers and views. Creating partial views to use on pages seems easy enough by creating macros (or macro partials) and adding them via the editor, though this is not very MVC-like. What I really wanted was to have my own controller return a view.

     

    One option is hijacking routes in Umbraco, but dealing with Umbraco's naming conventions and model inheritance seemed unnecessary, and is baggage I did not want.

     

    The option I went with is to create my own MVC controller and view. First create a macro partial and in it use @Html.Action("MyAction", "MyController"), then add the macro to a page via the rich text editor. Of course, out of the gate it did not work, because that would be too easy. Some quick searching revealed that inheriting from SurfaceController is the correct way to create your own controllers in Umbraco. After that everything worked, and I was able to pass my own custom model without inheriting from anything or worrying about naming conventions.
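    Put together, the pattern looks roughly like this. The controller, action and model names below are made up for illustration; the only Umbraco-specific piece is inheriting from SurfaceController:

```csharp
// A plain MVC controller made routable by Umbraco simply by
// inheriting from SurfaceController (Umbraco.Web.Mvc).
using System.Web.Mvc;
using Umbraco.Web.Mvc;

public class MyWidgetModel
{
    public string Message { get; set; }
}

public class MyWidgetController : SurfaceController
{
    public ActionResult MyAction()
    {
        // Any plain model; no Umbraco base class or naming convention required.
        var model = new MyWidgetModel { Message = "Hello from my own controller" };
        return PartialView("MyWidget", model);
    }
}
```

    The macro partial then contains @Html.Action("MyAction", "MyWidget"); note that MVC drops the "Controller" suffix from the name.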

     

    The other approach, or an addition to the above, is to create a Web API controller and use AJAX callbacks to bring functionality to partial views.

     

    This latest version of Umbraco is a step forward, though it still does not have a great answer for scaling horizontally or for a custom development process. As with many CMS systems out there, Umbraco promotes development in production, of which I'm not a fan. My usage pattern for Umbraco has been, and always will be, treating it as a custom website: create custom pieces in development, then deploy to test before production. Simple content is a different story; actually, anything which goes into the database is a different story.

    Full story

    Comments (0)

  • Methodology Mashup

    In the last post I mentioned how leaders and their influence create culture. This post talks about the other side of the same coin: process methodology. Culture and process are tightly related, as certain cultures have to be present to support a given process. If the culture is such that everything has to be hammered out 100% before any work begins, then agile processes may not work very well until the culture hurdle is addressed.

     

    Each methodology has its good and bad points. Many of the early methodologies were very prescriptive; over time they have become less and less so, mainly because one size does not fit all. Below is a list of items out of each methodology that I have found useful, ordered from most prescriptive to least prescriptive.

     

    The outline below is a proposed mashup built from different methodologies. The evolution of doing it better depends on experimenting to find what works in different situations. Keeping this in mind, the below is not a success plan but a sharing of information; it is up to you, the reader, to turn this information into knowledge and apply it to get the benefits.

    Things That Make Us Go Faster

    RUP

    • Vision/Business Case (aka Project Charter): In short, this is a definition of the starting point. It is a 1-3 page document that describes:
      • The purpose/business value
      • Vision
      • Stakeholders
      • Who the customers/users are
      • When
      • Business Sponsor
    • Risk/Issue List
    • Glossary – Vocabulary used for the project.

    Scrum

    • Small focused teams
    • Release planning - We often need to know "how big". Using team input, average cycle time, average lead time, and relative size comparisons with other projects, a breadbox size can be fairly easy to come up with. It is recommended to deliver a time-estimate range at this stage of a release.
    • Retrospectives
    • User Stories – While it is not a requirement to write product backlog items as user stories, doing so reminds us how humans will be using whatever it is we are building.
    • Frequent Demos – Get the feedback, let users drive. Feedback is the lifeline of a successful project.
    • Daily Scrum / Standup – Make sure everyone on the team knows what their teammates are up to. This is not a status meeting but a team-cohesiveness activity; the information gained from it is a side effect.
    • Release Often (i.e. Small Releases) – While this may not mean production every time (though big backlogs of items not released to production is not recommended) it is advisable to release to users often.

    Kanban

    • Visualize workflow – see it, know it, communicate it
    • State driven queues (e.g. backlog, ready, in progress, ready to be accepted, done)
    • Pull Based – A person pulls an item to work on when they have time, instead of having it pushed onto them.
    • Limit WIP (Work in Progress/Process) – The goal is to get things done not to see how many things can get started.
    • Flow driven process – A flow-driven process is about keeping things moving, not about filling up every minute of everyone's day. You will find a lot more gets done consistently in a flow-driven process than in a capacity-driven process, and there is slack in the system for change and adaptation to occur.
    • Prioritization Process – In Kanban it is not imperative to have the backlog fully prioritized, because a "Ready" queue can hold a small number of items. When the "Ready" queue has an opening, the project stakeholders can decide what is next. It can be useful to keep your top five backlog items at the top.
    • Cycle time, lead time and cumulative flow are used as trailing indicators (i.e. they use the history of what has been done). Using cumulative flow and a burn-up algorithm, the current and required run rates to deliver can easily be extrapolated.
    • Queues are used as leading indicators – The visual representation of the items moving through the queues allows for adaption of the process.
    • JIT (Just In Time) Estimation – This is where task breakdown can happen. When an item is ready to be worked on and is pulled by a “doer” then the task breakdown occurs and further detail may be gathered on the story/PBI.
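    The burn-up extrapolation mentioned above boils down to simple arithmetic. A minimal sketch (the numbers and names are illustrative, not from any tool):

```csharp
using System;

public class BurnUp
{
    // Current run rate: items completed per period, taken from cumulative flow history.
    public static double CurrentRunRate(int itemsDone, int periodsElapsed)
        => (double)itemsDone / periodsElapsed;

    // Required run rate: items remaining per period left before the target date.
    public static double RequiredRunRate(int backlogTotal, int itemsDone, int periodsRemaining)
        => (double)(backlogTotal - itemsDone) / periodsRemaining;

    public static void Main()
    {
        // 40 of 100 items done in 8 periods, with 10 periods remaining.
        Console.WriteLine(CurrentRunRate(40, 8));        // 5 items/period
        Console.WriteLine(RequiredRunRate(100, 40, 10)); // 6 items/period
    }
}
```

    Comparing the two rates is what makes cumulative flow actionable: if the required rate exceeds the current rate, scope or dates need to move.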

    Lean

    • Identify wasteful activities/artifacts, i.e. activities that do not have a return on the investment. Identifying wasteful activities can be difficult and requires down-to-earth introspection, with an understanding that becoming more skillful at anything requires failure and learning from those failures. Failure here can also mean not doing things as well as they could be done; while that is not classified as wrong, it is a failure to do it better.

     

    I did not mention XP (Extreme Programming) above, mainly because this is from the perspective of a project/program/portfolio manager. While XP does have guidance on planning and managing, I have found it less practical than the above for most situations; not to mention that not every project may involve software development. I do, however, agree with XP's practices (for software projects), which are applied by the "doer" team with guidance from a development manager/lead.

    Tooling

    Of all the items above, the biggest area that can leverage tooling is the Kanban section. The rest of the bullet points can be put into any tooling solution, and many of them are human driven.

    LeanKit

    LeanKit does Kanban very well. Many thanks to Brian Button for pointing me to it. It supports easy manipulation of queues, WIP limits, swim lanes and policies. The ability to build hierarchies of boards is paramount for portfolio management. LeanKit's portfolio management is interactive, with drill-downs to different boards, user stories/PBIs and tasks.

     

    Portfolio Management Overview:

    http://vimeo.com/user9276415/review/44129945/a253134856

     

    A feature pointed out in the above video which is really useful is having a team board which may have items on it from many different projects. This happens so often on projects with supporting groups like operations.

     

    This software package does supply cycle/lead time statistics as well as cumulative flow, which are important for managing a project. LeanKit also has a burn-up which can be added to the cumulative flow diagram to show the current burn rate, the rate required to complete what is in the backlog, and the rate required to close the gap between the two.

     

    LeanKit is considerably cheaper than other options that provide this level of project and portfolio management.

    Full story

    Comments (0)

  • Influence Your Way to Success

    No matter what your role, influence plays an important part. Great leaders are exceptional influencers and leadership creates culture. Leaders are not bosses just as leadership is not management. If you are at a great company you probably have great leaders that have very strong personal powers.

     

    Personal Power gains influence by using 'power with' someone.

    Positional Power induces compliance by using 'power over' someone.

     

    A couple of ways to think of personal and positional power which may help are:

    • Positional power is a result of strong personal power, i.e. the person who is a rock star gets promoted.
    • Personal power is a leading indicator while positional power is a trailing indicator of leadership success, i.e. being a great leader with your personal power shows what is being done now, while positional power is a result of what you have done in the past.

     

    Personal power will get a leader further than positional power as it is based on trust and respect among team members. Different types of personal power are illustrated below.

     

    Personal Powers

    • Expert: The perception that the leader has relevant education, experience and expertise
    • Information: The perceived access to, or possession of, useful information.
    • Referent: The perceived attractiveness of interacting with the leader.

     

    Positional power is using one's position to induce someone to do something. Positional power does not require a base of trust and respect; when that base is not present, it leads to mere compliance.

     

    Positional Powers

    • Reward: The perceived ability to provide things that other people would like to have.
    • Connection: The perceived association of the leader with influential persons or organizations.
    • Coercive: The perceived ability of a leader to provide sanctions, punishments, or consequences for not performing.

     

    When a leader has positional power as well as a very loyal following it is because of their personal power strengths not their positional power attributes. As a leader, make sure you have established a strong foundation of trust and respect while using your personal powers before leveraging your positional powers.

    Full story

    Comments (1)

  • 10 Steps to a Productive Retrospective

    A productive retrospective is a key part of continuous improvement (aka Kaizen) for your team, process and culture. A retrospective is time to reflect on what the team has done and how it can be changed for the better.

     

    A retrospective is different from a "lessons learned": retrospectives happen more frequently and during the execution of a project. Lessons learned generally happen at the end of a project, and many do not prioritize the outcomes so action can be taken. For this reason, among others, lessons-learned meetings tend to be less beneficial to improving the overall process than retrospectives.

     

    • Hold a retrospective every 2-4 weeks, depending on the team's speed.
    • As the facilitator, start the time together by making sure each person has a stack of sticky notes to write on.
    • As the facilitator, find a blank wall and create spaces for Start, Stop and Continue. This can be done by writing each title on a sticky note and putting it on the wall. Another option is to write the titles on a whiteboard.
    • As the facilitator, explain what Start, Stop and Continue mean.

      Start: things we want to start doing.

      Stop: Things we want to stop doing.

      Continue: Things we want to continue doing (this could be something we have been doing but it is not yet a habit).

    • As the facilitator explain the process of writing down something on a sticky note and putting it on the wall under Start, Stop or Continue. An example may be beneficial.
    • Start the clock. Give the team 5-10 minutes to write their thoughts down and put them on the wall. Remind the team that retrospectives happen with frequency so this is not an all-encompassing attempt.
    • Once all sticky notes are up review them with the room. Read each item out loud and elaborate briefly so everyone has an understanding of the item. Be sure to not let the retrospective get derailed by long discussions as this is not the place where a solution is going to be found.
    • Give everyone 5 votes. Explain that they may distribute their votes among the items however they like.
    • As the facilitator, take the top 1-3 items as the vote count dictates. It is strongly recommended that only the top item be singled out, since work will have to be done on it. Borrow a core principle of branding: focus. Focus is what makes teams great, which is why focusing on the most important thing is so beneficial. Taking on all the items in a retrospective will turn it into a distraction instead of a source of continuous improvement.
    • Explain the next steps for the top item(s) and how work will be done on them.
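    The voting and selection steps above amount to a simple tally. A sketch, with invented item names:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DotVoting
{
    // Count each vote and rank items from most to least voted.
    public static List<KeyValuePair<string, int>> Tally(IEnumerable<string> votes)
        => votes.GroupBy(v => v)
                .Select(g => new KeyValuePair<string, int>(g.Key, g.Count()))
                .OrderByDescending(kv => kv.Value)
                .ToList();

    public static void Main()
    {
        // Five votes spread across three sticky notes.
        var votes = new[] { "pair more", "pair more", "shorter standups", "pair more", "automate deploys" };
        var ranked = Tally(votes);
        // The single top item is what the team focuses on next.
        Console.WriteLine($"{ranked[0].Key}: {ranked[0].Value} votes"); // pair more: 3 votes
    }
}
```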

    Full story

    Comments (0)

  • PaaS Faceoff - Azure vs AWS

    This post will cover the differences laid out as pros/cons between Azure and AWS as a PaaS, while touching just a bit on IaaS.

     

    Why PaaS? Focus on software, not virtual machines and networks. A PaaS should prescribe guidance on how to build for it, making it easy to use all the building blocks the PaaS provides. For some this is not their cup of tea, though I will take delivering in a fraction of the time, and starting to get product feedback, any day over building the perceived perfect technical environment.

     

    I will focus on web services and websites using IIS and the .NET framework. I will also briefly touch on some of the other pieces in a PaaS.

     

    Azure

    The main piece of Azure I will focus on is Cloud Services (aka Hosted Services). Microsoft has thought through the Cloud Services implementation, as it provides guidance just by its existence.

    Pros

    • Having the Azure Cloud Service be a separate project in Visual Studio that references the service project is really helpful, as the service projects are not necessarily specific to Azure.
    • Separation of the service configuration from the application package is really key, as it allows one package to be created with a different configuration applied for each environment.
    • Configuration is at the deployment level, and the Azure configuration manager makes it very easy to work with. The Azure configuration manager first checks the Azure configuration, then falls back to the local configuration appSettings; very useful if you are moving to Azure and a key/value was not yet transferred to the Azure configuration.
    • Production and Staging slots for Cloud Services are so nice to see. Many years ago I did this with symlinks so that quick action could be taken if a deployment went wrong, and any priming that needed to be done could be done. For test environments I tend to use the staging environment, as the URL is something people cannot guess.
    • Microsoft provides emulators for compute, table storage, queues and blob storage which is very handy when developing locally.
    • Azure In-Role Caching is so easy to use and provides a fast place to put session state if needed.
    • Really good developer experience.
    • .NET APIs are clean and easy to use.
    • Azure .NET SDK has transient fault handling built in for storage components (blob, queue, table) and for service bus the enterprise library transient fault handling block can be used.
    • EF 6 now has SQL Azure transient fault handling
    • Worker Roles for non-web-oriented processing (or open up a port and you can host in them if IIS is not needed)
    • Per minute pricing!
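    The configuration fallback described in the bullets above (check the Azure configuration first, then the local appSettings) can be sketched generically. The dictionaries here stand in for the two configuration sources; this is not the real Azure API, just the lookup order it implements:

```csharp
using System;
using System.Collections.Generic;

public static class ConfigFallback
{
    // Stand-in for the Azure service (deployment-level) configuration.
    static readonly Dictionary<string, string> azureConfig =
        new Dictionary<string, string> { ["CacheTimeout"] = "120" };

    // Stand-in for the local web.config/app.config appSettings.
    static readonly Dictionary<string, string> localAppSettings =
        new Dictionary<string, string> { ["CacheTimeout"] = "60", ["LogLevel"] = "Warn" };

    // Azure configuration wins; otherwise fall back to local settings.
    public static string GetSetting(string key)
    {
        if (azureConfig.TryGetValue(key, out var value)) return value;
        if (localAppSettings.TryGetValue(key, out value)) return value;
        return null;
    }

    public static void Main()
    {
        Console.WriteLine(GetSetting("CacheTimeout")); // 120 (Azure value wins)
        Console.WriteLine(GetSetting("LogLevel"));     // Warn (falls back to local)
    }
}
```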

    Cons

    • Cannot have a non-public Cloud Service which is load balanced and autoscaled.

    Suggestions

    • I would love to see a Cloud Service which could only have other Azure Cloud Services talk to it while still being load balanced, much the same way SQL Azure does with a firewall rule. Yes, there are virtual networks, though you lose load balancing, and honestly that is a bit overkill for the intended use. All I want is for public script kiddies to not be able to hit the service endpoint.
    • Bring the PaaS and IaaS together with something similar to AWS Cloud Formation.

     

    AWS

    The main piece of AWS I will focus on is Elastic Beanstalk. Elastic Beanstalk is the closest AWS comparison to the Azure PaaS, though using Cloud Formation you can assemble a template which comes close.

    Pros

    • SNS notification hooks
    • Can be used in conjunction with Cloud Formation. Yes this is a bit outside of a PaaS though IaaS and PaaS have a fairly large intersection in today’s enterprise.
    • Can create Elastic Beanstalk applications inside a virtual private cloud.

    Cons

    • Configuration is part of the Web Deploy package, which means you cannot have the scenario outlined above for Azure. Instead a package per environment would have to be created, though some customization to the packaging process would be needed to get the correct configuration file. More can be found here - http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html.
    • NOTE FOR ABOVE: Using Cloud Formation you can create different configurations for different environments though now this is outside the AWS PaaS solution to achieve something similar to what Azure can do out of the box.
    • FURTHER NOTE: OpsWorks is another widget in AWS which can do much of what the above two approaches do, though a bit differently. Honestly, this looks really cool, though I do not see Windows as an option and I would not call it a PaaS; really cool nonetheless.
    • No production and staging slots where a virtual IP could be flipped to provide a quick back-out from a deployment gone wrong, or to do any priming before putting into production.
    • APIs could be cleaner. After moving a product from AWS to Azure I was impressed by how much easier the Azure .NET APIs were to use.
    • Pricing is per hour.

    Suggestions

    Pretty simple, do the stuff in the pros list of Azure.

     

    Final Thoughts

    Both Azure and AWS are great, though I have found Azure to be more purposefully built. By purposefully built I mean built to solve a specific problem, which is what a PaaS should do. AWS feels like a bucket of widgets, which is why AWS is considered more of an IaaS.

     

    There is of course the intersection of PaaS and IaaS. A good PaaS is a much harder nut to crack: too loose, and it becomes nothing more than a template on top of an IaaS, not really solving a specific problem; too tight, and it becomes inflexible and does not fit the bill for a majority of software. AWS does a better job of handling this intersection than Azure, though there is room for improvement, both in the AWS PaaS and in how it works with the IaaS. I'm excited to see where Azure decides to take it.

     

    At the end of the day I prefer Azure for a PaaS as it streamlines and guides the process of delivering software better than AWS while still giving flexibility.

    Full story

    Comments (1)

  • Will The Cloud Raise The Need For Quality?

    With the cloud enabling people and companies to deliver software to market faster than ever before, will the bar for quality be raised? In many cases the cloud has removed or lessened the barrier to entry into a market.


    The software industry is growing, and thus more software is available. The cloud (and the mobile industry) are enabling this growth. As more software is delivered to market via the cloud, more of that software will be of lower quality.


    This has been shown already by the ease of app delivery in the mobile world. How many of those QR code reader apps actually work?


    If there is an increasing amount of lower quality software then the software that does work becomes more valuable.


    My answer is YES! The cloud will raise the bar for quality over time as there are more options for users and they are more likely to pick an option that is known to work, so make sure your software is rock solid!

    Full story

    Comments (0)

  • Outcome vs Hourly Companies

    I categorize companies into two buckets, those that count hours and those that drive towards outcomes.


    Chances are we all have worked for or currently work for an hourly company. You know the type: the ones that want to track every minute you work, and only work done in the office counts. Hourly companies have a fetish with the 40-hour work week and say things like "40 hours is a minimum".


    What hourly companies do not know is that counting hours breeds compliance, and everything will take longer to get done. This type of company shows it does not trust its employees at its core and is driven by the fear that employees are stealing wages by not putting in hours, regardless of how well goals are being achieved.


    Outcome-based companies are focused on achieving goals (i.e. outcomes). This is not to say they are not interested in the number of hours people put in to achieve these goals, though the hours are a footnote, an analytic if you will. Outcome-based companies trust the people they employ and hold them accountable for Getting Shit Done! (GSD for short) They trust people will deliver what is needed and put in the time to do it, whether it is 9-5 Monday through Friday or working late at night and a bit on the weekend with a Wednesday off.


    I’m a GSD kind of guy and I prefer to work with other GSDers.  If a timesheet is more important to a company than reaching a desired outcome which will drive a strategy, then I feel that it may be challenging for that company to do anything special in this world.

    Full story

    Comments (0)

  • The Logging Truth

    The reality is that there are more software engineers (aka developers) in this world who do not understand logging than who do. Every modern coding language has a logging system already in place. For example, in .NET there is Tracing, which writes out to a collection of trace listeners. Unified Logging is simply another trace listener which sends data to another repository.
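    A trace listener that forwards messages to another repository can be sketched in a few lines; here the "repository" is just an in-memory list, not the actual Unified Logging connector:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// A minimal listener that forwards every trace message to a repository.
public class RepositoryTraceListener : TraceListener
{
    public List<string> Repository { get; } = new List<string>();

    public override void Write(string message) => Repository.Add(message);
    public override void WriteLine(string message) => Repository.Add(message);
}

public class Program
{
    public static void Main()
    {
        var listener = new RepositoryTraceListener();
        Trace.Listeners.Add(listener);

        // Trace calls require the TRACE compilation symbol,
        // which project builds define by default.
        Trace.WriteLine("Order 42 failed validation");
        Trace.Flush();

        Console.WriteLine(listener.Repository.Count + " message(s) captured");
    }
}
```

    Anything that plugs into Trace.Listeners sees every message the application writes, which is what makes the trace-listener model a natural extension point.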


    The structure that exists in many organizations today does not help, as the task of putting logging in is left in the hands of developers.


    So why is this the case? Well, developers are not held accountable for knowing what is going on inside the software they build. Many times there is an application support team that the responsibility gets thrown over the wall to. The few developers out there who have had the responsibility of supporting an application, and being proactive about it, understand how important getting the right information AT THE RIGHT TIME is.


    Logging information is half the story; in other words, collecting information is one half of the solution. The other half, which is equally important, is knowing when something happens that needs attention. These are two sides of the same coin, and each side has a different person held to different accountability standards.


    Because these two roles have different motives, there needs to be a way to independently set up notifications outside of the information collection. Whether it is Unified Logging or not, this is the primary objective of any software monitoring setup: let people get notified when they want to get notified, NOT when a developer says they should be notified.


    Then analysis comes in...

    Full story

    Comments (0)

  • IIS Express Crashing

    Recently I had an aggravating issue with IIS Express. It turns out it was something very simple. I had added some warmup config to the web.config, like below:

     

    <applicationInitialization remapManagedRequestsTo="Loading.html" skipManagedModules="true">
          <add initializationPage="/" />
    </applicationInitialization>

     

    For some reason this made IIS Express blow chunks; I assume that IIS Express does not support applicationInitialization.

     

    I removed that piece of configuration, as it was a nice-to-have, and IIS Express runs as expected again.

    Full story

    Comments (0)

  • Resiliency with Azure Caching

    Using Azure caching can be easy when everything works, though there have been many times when everything has not worked. This post applies to dedicated caching (previously known as caching preview) and shared caching. So how do you build resiliency into your caching?

    Let's assume data is being cached from a SQL Azure database, though what is outlined in this post is applicable to other data stores as well. First, let's outline what could go wrong.

    1) The cache becomes unavailable (could be because of a variety of reasons)

    2) The database becomes unavailable

    There are many reasons for number one to happen, and the easiest way to handle it is to fall back to retrieving from the database. You can have code that looks something like this:

     

    public RealyCoolCacheItem GetRealyCoolCacheItem(string key)
    {
         try
         {
              var cacheItem = GetCacheItem(key);

              if (cacheItem == null)
              {
                   var reallyCoolItem = RetrieveReallyCoolCacheItemFromDb(key);

                   if (reallyCoolItem != null)
                   {
                        DataCache.Put(key, reallyCoolItem, TimeSpan.FromHours(2));
                   }

                   return reallyCoolItem;
              }
              else
              {
                   return (RealyCoolCacheItem)cacheItem.Value;
              }
         }
         catch (Exception ex)
         {
              //Log the error
         }

         //Would end up here if working with the cache throws
         return RetrieveReallyCoolCacheItemFromDb(key);
    }

    The second situation, a database going down, is a bit more interesting. The common direction is to have a failover database, though this is a big stick for a simple problem. The cache is used to read frequently used data.

    The solution Unified Logging has implemented is to serialize cache data to blob storage each time it is retrieved from the database, overwriting the existing blob if necessary. If the retrieval from the database fails, it then reads the last blob written and things keep working. This is how the submission endpoints of Unified Logging keep running when these undesirable events occur.

     

    private RealyCoolCacheItem RetrieveReallyCoolCacheItemFromDb(string key)
    {
         try
         {
              //Retrieve from the DB here
              var reallyCoolDbItem = magicDb.GetReallyCoolItem(key);

              //Save the item to the secondary failover datastore
              //In this case the secondary store is blob storage
              //RetrieveReallyCoolCacheItemFromDb_Failover is a constant
              failover.Save(RetrieveReallyCoolCacheItemFromDb_Failover, reallyCoolDbItem);

              return reallyCoolDbItem;
         }
         catch (Exception ex)
         {
              //Log the error

              //The retrieval from the db has failed so get the item from the secondary datastore
              return failover.Retrieve<RealyCoolCacheItem>(RetrieveReallyCoolCacheItemFromDb_Failover);
         }
    }

    At this point you are probably saying: all well and good, but what is "failover"? It is a simple class implementing a simple interface with Save and Retrieve methods.

     

    public interface ICacheFailover
    {
          void Save<T>(string key, T serializableObject) where T : class;

          T Retrieve<T>(string key) where T : class;
    }
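    A minimal in-memory implementation of that interface shows the shape; the post's real failover writes to blob storage rather than a dictionary:

```csharp
using System;
using System.Collections.Generic;

public interface ICacheFailover
{
    void Save<T>(string key, T serializableObject) where T : class;
    T Retrieve<T>(string key) where T : class;
}

// In-memory stand-in for the blob-backed failover store described above.
public class InMemoryCacheFailover : ICacheFailover
{
    private readonly Dictionary<string, object> store = new Dictionary<string, object>();

    public void Save<T>(string key, T serializableObject) where T : class
        => store[key] = serializableObject; // overwrite the existing entry if present

    public T Retrieve<T>(string key) where T : class
        => store.TryGetValue(key, out var value) ? value as T : null;
}

public class Program
{
    public static void Main()
    {
        ICacheFailover failover = new InMemoryCacheFailover();
        failover.Save("ReallyCoolItem", "value-from-db");
        Console.WriteLine(failover.Retrieve<string>("ReallyCoolItem")); // value-from-db
    }
}
```

    Swapping the dictionary for blob reads and writes (with serialization) gives the behavior described in the post without changing any calling code.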

    Full story

    Comments (0)

  • Watch the JavaScript Widgets

    Adding that piece of JavaScript to a web page to get a nice shiny widget is so easy, whether it be a like button, a badge or a cool popup window; but keep in mind the page load times and the amount of JavaScript being downloaded to the client's browser.

     

    Pages get bloated very quickly, and soon over half of what is downloaded for your page is JavaScript. In my experience very little of the JavaScript in question actually makes for a better user experience.

     

    What a lot of this JavaScript does is render out HTML and download images. Let's look at the steps: the JavaScript is downloaded from the vendor's server, then it runs, which produces HTML, downloads images, and possibly renders more JavaScript to the browser.

     

    An alternative is to look at what the output of the JavaScript is and do it yourself without the widget JavaScript. This means downloading the widget image, putting it on your server or CDN (and controlling the caching), and writing the HTML yourself. Make a decision whether to have the cool fly-over or whether just the badge/image will suffice. Easy does not always mean well-performing; sure, one widget is not always bad, but they add up quickly.

    Full story

    Comments (0)

  • Unified Logging Releases WordPress plugin and PHP Connector

    I’m pleased to announce that Unified Logging has released two new connectors, one for WordPress and one for PHP.

     

    You can find out more about these connectors from these related posts:

     


    Sign Up and Get Collecting!



    Full story

    Comments (0)

  • SkyDrive I want to Love You but….

    Is SkyDrive a good alternative to DropBox?

     

    NO! And here is why.

    1. You cannot share a folder and have that folder sync to everyone's desktop, creating a collaborative environment that also gives you notifications when something has changed. I have found the notifications cut down on emails, because people see when something is updated and things just seem to keep moving.
    2. When a folder or item is shared, the person it is shared with does not have to accept the sharing invite. This means I could get a list of Live IDs and share a million items with each one, totally cluttering up their WEB BASED shared folder.

     

    Box (formerly Box.net) looks like a good alternative, but the pricing is such that when you hit your 5 GB mark you have to go to the $15-a-month plan. Granted, $15 a month gets you 1000 GB, which is a good deal if you are using it. If not, this becomes a costly service.

     

    I do think DropBox needs to consider lowering their prices a bit and also consider a different pricing model that bridges the free to pro versions which would let someone purchase storage space.  I could easily justify $20 a year for 15GB of storage but it is hard for me to justify $100 for 100GB of storage because I will not use all that space.  Of course there are free ways of getting 15GB of storage….

    Full story

    Comments (0)

  • Windows 8 meh

    I recently updated both my laptop (a MacBook Pro) and my beast of a desktop to Windows 8.  The short of it is that things take me longer than they did in Windows 7.

     

    I will elaborate.  Every time I log in I press Windows+D to get to the desktop.  There are some hacks around this, but why is the desktop suddenly a second-class citizen?  These machines are not touch devices, so being greeted by the Metro UI is just annoying, IMO.  I do not want full-screen apps like Live Messenger on a laptop or desktop; on a touch-enabled slate/tablet that makes sense.

     

    I use the Windows key a lot to just start typing and find what I'm looking for.  Quite honestly, I find the way this works in Windows 8 distracting, and it takes me more time to find what I'm after.

     

    I'm hoping Microsoft will address these concerns, which are shared by many more people than just me, and release an update that allows Windows 8 to work more like Windows 7 on non-touch devices.

    Full story

    Comments (0)

  • Unified Logging - Ditch the Monthly Subscription

    Why pay for something you are not using, or not using enough of?

     

    Monthly subscriptions are a thing of the past for enabler products like Unified Logging.

     

    By enabler products I mean software products that enable you to build better software products. Unified Logging wants you to build better software products. This is the motivation and passion behind Unified Logging, and why we have a very affordable per-message pricing model so everyone can take advantage of it.

     

    Unified Logging - Logging for All  

    Full story

    Comments (0)

  • Unified Logging Offers a New North America Option!

    Unified Logging is proud to announce the availability of a new data collection option for applications that are sending information within the North American region. In your profile you will notice the addition of a North America Submission URL, which is optimized for that region.

     

    Enjoy!

    Full story

    Comments (0)

  • Passwords Everywhere!

    For many years now I have been using KeePass to store my passwords.  Then along came DropBox. Awesome!  Drop my password database in there and my passwords are everywhere.

     

    Then came my iPhone, and until now there was no free way to use my KeePass database on my phone (or iPad).  This morning I found the FREE app KyPass, which does the trick.  Just drop your password database in a DropBox folder named Crypted, then in KyPass change the settings to use DropBox and BAM!  Passwords everywhere.




    Full story

    Comments (0)

  • Unified Logging Featured on Channel 9

    Recently Unified Logging was featured on Microsoft's Channel 9. Check out some of the inside details of how Unified Logging does its magic.

    Full story

    Comments (0)

  • Unified Logging Releases NuGet Packages

    Unified Logging® is proud to announce the availability of NuGet packages for .NET, Windows Phone and Silverlight. Get up and running fast with Unified Logging and these new packages:

     


    Full story

    Comments (0)

  • Umbraco - Membership Login Slow SOLVED

    The last post on this topic showed how to get Umbraco 4 running in Azure without using the accelerator.


    Since then there have been some discoveries, the main one being a very slow login for members.  As shown in the previous post, the Umbraco settings are updated when the role starts with the IP addresses of the instances in the deployment.  This all works fine and dandy until the role instance count is changed.


    Each time a login happens, Umbraco pushes to the distributed servers in the list, and when an IP is invalid it takes time for that call to fail.  You can see this error in the umbracoLog table, or you can implement your own external logger and use something like Unified Logging so all logging information can be seen in one place.


    Fortunately there is an easy fix: handle the RoleEnvironment.Changed event, and if a change of type RoleEnvironmentTopologyChange occurred, run UmbracoAzureSetup.Setup() again, which will update the servers in the distributedCall list.


    private void RoleEnvironment_Changed(object sender, RoleEnvironmentChangedEventArgs e)
    {
        // Only react to topology changes (i.e. the set of role instances changed)
        var topoChanges = e.Changes.OfType<RoleEnvironmentTopologyChange>();

        if (topoChanges.Any())
        {
            // The distributed server list needs to be updated
            UmbracoAzureSetup.Setup();
        }
    }
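    For completeness, the handler above has to be subscribed somewhere. A minimal sketch of doing so in the web role's entry point (assuming the standard RoleEntryPoint override and the UmbracoAzureSetup class from the earlier post):

    ```csharp
    public override bool OnStart()
    {
        // Build the initial distributed server list, then subscribe so that
        // topology changes (instance count changes) trigger a rebuild.
        UmbracoAzureSetup.Setup();
        RoleEnvironment.Changed += RoleEnvironment_Changed;
        return base.OnStart();
    }
    ```

    Note that RoleEnvironment.Changed fires after the change has been applied, so by the time the handler runs the new instance list is what Setup() will see.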

    Full story

    Comments (0)
