A Windows Azure infographic (and some new blogging structure)
Windows Azure is now great. And please welcome Redmond Recap.
The substance
I have posted an infographic describing Windows Azure’s latest incarnation. (seriously, go check it out; it’s more info than graphic, but still!)
Umm where?
Then, a short explanation of the new blogging arrangement. After living with my current blogging setup for the last 8 years, I have decided to spin off a new blog, Redmond Recap. It will focus on relevant real-world Microsoft-related product information – in other words, distributing the more abstract technical content I produce daily when supporting our clients in strategic and tactical issues. Redmond Recap will look more like a news site, but won’t be one; I don’t have time to maintain one, nor is there a need. It aims to be a lightweight version of real technology strategy consulting, with carefully deliberated recommendations and ponderings.
This blog will remain in operation. I will use it to post notes of a more personal (talks, produced material, project descriptions etc.) or specific (small-surface tips, tutorials etc.) nature. I hope these arrangements serve my rather diverse readership better – and also lower the bar for me to post minor notes here.
Feel free to follow Redmond Recap – on Twitter or via RSS if you want.
June 18, 2012
· Jouni Heikniemi · No Comments
Tags: Azure · Posted in: Cloud, General
Inexperienced developers available now!
Being listed as a CEO of a software company in the western world gets you a steady torrent of outsourcing offers from India. Starting from 10 euros an hour, you can have your share of developers – or in the case of “enterprise technologies like Silverlight” (!), you would maybe pay fifteen. But what kind of developers would you get for that?
One company bothered to attach a brochure describing their operation. Below, you can see how their staff is profiled by years of experience.
Now, we’re talking about a company using the term “technical expertise” several times in their marketing. Yet, they are implicitly pointing out that your hired dev team consists of people with one year of experience on average. And two years of drudgery – which may not even be a complete project – is stated to be the standard requirement for becoming a lead. Oh, and naturally three years of “leading” (whatever that may mean) qualifies you to manage projects.
The answer to the intro question: You get what you pay for. It’s just rarely so obviously advertised.
May 21, 2012
· Jouni Heikniemi · 3 Comments
Tags: outsourcing · Posted in: Misc. programming
TechDays Finland 2012: my talk on unit testing fundamentals
TechDays Finland 2012 is going on at full speed. I was lucky to be talking in the first slot right after the keynote, so that the rest of the event is way less stressful for me :-)
Update 2012-03-29: The video link was added.
The venue: Helsinki Fair Centre
As with a few previous TechDays events, the venue is Helsinki Fair Centre. In past years, TechDays has occupied the conference wing. This year, it spans quite a bit of the actual fair halls as well. Hugely more space – no more irritatingly crowded corridors.
Also, bigger rooms for sessions! Most sessions have fit into their rooms, and a couple have been handled by adding an overflow room with video/audio streaming. This thing is getting quite professional, indeed!
With the first day soon behind us, a second one still lies ahead. Quite a few presentations going on – there are a dozen simultaneous tracks! And with 2800+ peeps aboard, it’s definitely the biggest Microsoft pro event in Finland so far.
My talk: Unit testing
The title for my talk was “Picking up unit testing” (a liberal translation; I did give the talk in Finnish). Instead of going through the basics of unit testing techniques (setups, assertions, fixtures, …), I took a different road: I talked about the politics, the goals and the various logical approaches to unit testing. To the right: The beginnings of the crowd as I saw it.
Here’s a shortened list of the items I discussed:
- How to convince your boss to adopt unit testing?
- What to expect from unit testing?
- How much additional time do I need to spend – and when?
- What are the horrible dependencies and why do I need to tackle them?
- What kind of project/module should I start unit testing with?
- Should I go with easiest-first, toughest-first or maybe just test when fixing bugs?
- Black box vs. white box
- The pitfalls of code coverage
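To illustrate that last point: coverage tells you what ran, not what was verified. A contrived example (hypothetical code, xUnit syntax – any framework behaves the same):

using Xunit;

public static class Discount
{
    // Deliberately buggy: gold customers should get 10 % off, but this gives 1 %.
    public static decimal Apply(decimal total, bool isGoldCustomer)
    {
        return isGoldCustomer ? total * 0.99m : total;
    }
}

public class DiscountTests
{
    [Fact]
    public void Apply_covers_every_branch()
    {
        // Both branches execute, so coverage reports 100 % –
        // but nothing asserts the results, and the bug passes unnoticed.
        Discount.Apply(100m, isGoldCustomer: true);
        Discount.Apply(100m, isGoldCustomer: false);
    }
}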
The slides are here, and a video recording is available on YouTube. Also, I referenced two of my blog posts on code review – a process I heartily recommend also for unit test code:
http://bit.ly/katselmointi1
http://bit.ly/katselmointi2
Thanks to everyone who attended – and enjoy your TechDays!
March 8, 2012
· Jouni Heikniemi · 6 Comments
Tags: presentations, unit testing · Posted in: Misc. programming
My talk on ASP.NET and modern web development
On 18th January I was speaking at an HTML5 seminar arranged by Microsoft Finland (agenda in Finnish). Although my presentation was in Finnish, you can find a short link-annotated recap of the presentation – and links to the material – here.
The material (partly in Finnish)
- My PowerPoint slide deck
- Demo #1: HTML5 Editor templates, Modernizr
- Demo #2: JSON Data transfer, OData
I also demoed WebSockets and SignalR using Microsoft’s demos from the //build conference. Paul Batum has a detailed blog post on those demos. The demos themselves are available on GitHub.
The presentation was recorded and will be available on YouTube. I’ll update this blog post with the link once the video is out.
A short recap of the presentation
I started by reminding the audience that ASP.NET is no longer equal to Web Forms, and the pain points regarding Web Forms are largely gone. After that, I went on to discuss the fact that HTML5 and CSS3 aren’t really server-side features, and no server-side feature can greatly affect the effort of building HTML5 apps.
With that said, it was time for the first demos. I pulled up Demo #1 (Html5DemoApp2.sln), containing a customer info form. I pointed out that MVC3, using client-side validation, already uses HTML5 data attributes to convey the validation criteria and messages. To enhance the HTML5 experience, I pulled HTML5 Editor templates from NuGet, enhancing the form with HTML5 input types. This was demoed with Firefox validating the email field properly, as well as Opera rendering the birthday field as a datepicker.
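To give an idea of the moving parts, here’s a minimal sketch of such a model – the names are hypothetical, and the editor templates package plus MVC’s unobtrusive validation handle the actual rendering:

using System;
using System.ComponentModel.DataAnnotations;

public class Customer
{
    [Required(ErrorMessage = "Email is required")]   // emitted as a data-val-required attribute
    [DataType(DataType.EmailAddress)]                // editor templates map this to <input type="email">
    public string Email { get; set; }

    [DataType(DataType.Date)]                        // ...and this to <input type="date">
    public DateTime? Birthday { get; set; }
}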
Moving on, I started talking about passing data from the server to JavaScript – a topic generally considered surprisingly difficult. I first showed Demo #2 (Html5DemoApp3.sln) with a simple ASP.NET action method returning a JSON representation of three customers, showing the JavaScript needed to render this data to the page. Then, I took a quick dive into OData, and showed how to develop a customers-returning API.
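The server side of such a JSON-returning action is only a few lines in MVC3. A minimal sketch with hypothetical names:

using System.Web.Mvc;

public class CustomerController : Controller
{
    // GET /Customer/List – returns the customers as a JSON array.
    public ActionResult List()
    {
        var customers = new[]
        {
            new { Id = 1, Name = "Maija" },
            new { Id = 2, Name = "Matti" },
            new { Id = 3, Name = "Liisa" }
        };
        // JSON responses to GET are blocked by default as a JSON hijacking precaution.
        return Json(customers, JsonRequestBehavior.AllowGet);
    }
}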
The last segment of the session was all about fast communication between the client and the server. I showed a few slides presenting polling, long polling and finally, WebSockets. I then went on – with some mandatory demo effects! – to show how to build a simple chat application using IIS 8 and Visual Studio 11. Further on, I discussed how the abstraction level of websocket communications can be raised by leveraging the SignalR library.
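The hub side of such a chat is remarkably small. A sketch in the style of the SignalR releases of that time – note that Clients is dynamic here, and the API has changed in later versions:

using SignalR.Hubs;

public class Chat : Hub
{
    public void Send(string message)
    {
        // Invokes the client-side addMessage callback on all connected clients.
        Clients.addMessage(message);
    }
}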
For the conclusion, I reminded the audience that Microsoft developers can no longer afford to live in their private closed confines. Microsoft has opened up by supporting open source endeavors and open protocols, and developers should follow suit. Building HTML5 applications with .NET is in fact very easy – as far as the server platform can help you, the Redmondian stack pretty much does it.
January 20, 2012
· Jouni Heikniemi · No Comments
Tags: ASP.NET MVC, HTML5, JavaScript, OData, presentations, SignalR, WebSockets · Posted in: Web
Valio.fi deep dive #8: Resources and ORM
Now we’re standing at the edge of the code pool. Let’s dive in! Good background reading: Deep dive #4 on ORM choice, deep dive #7 on database schema.
The resource model
As I described in my previous post, we ended up storing all different types of resources in one table. However, we model each of them as a separate class (a subclass of Resource). Even though resources themselves don’t have too many fields to separate them from one another, we have lots of code specific to each resource type. Thus, strong typing of resources improves the IntelliSense experience and type safety. Mapping-wise, we use the Type column as the discriminator.
When the project was born, NHibernate 3 wasn’t out yet, so we used NH 2.x + FluentNHibernate – thus, the following mapping examples are in the fluent syntax instead of the canonical XML.
Before we look at that, let’s recap what we’re trying to achieve:
We have a “data” column of type xml. That XML contains two distinct elements of data (simplified but sufficient truth): First, the widgets structure, which is really common for all types of resources, and second, the fields specific to each resource type.
This duality causes us some headaches. When mapping an XML column in NHibernate, you have to map it as a custom type. A user type mapper does not support splitting a table column into several properties. It does allow us to serialize and deserialize a complex object, though.
Getting the containment right
Since our data column contained structure specific to a resource type, we had to map the data xml column in Resource’s subclasses. Enabling the widget data (common to all Resources) to be accessed on the Resource base class level only required a little class design.
Our Resource class contains a property of type ResourceData which contains an IList<WidgetConfiguration>. ResourceData is in fact abstract: each of the Resource’s subtypes also defines a ResourceData derivative that matches its own set of custom fields. For example, a RecipePage.CustomData class adds a RecipeId field.
At the other end, the subclasses of Resource introduce fields driven by their custom data. For example, RecipePage exposes:
public virtual Guid RecipeId
{
    get { return ((CustomData)CustomResourceData).RecipeId; }
    set { ((CustomData)CustomResourceData).RecipeId = value; }
}
The typecast is somewhat ugly and requires that every Resource.CustomResourceData instance stays in sync with its Resource subtype (a RecipePage always expects a RecipePage.CustomData). Of course, this isn’t much of an issue in practice.
We could have added some strong typing by making Resource a public class Resource<TCustomData> where TCustomData : ResourceData, and then have a “public virtual TCustomData CustomResourceData” on it. That way, a RecipePage would have inherited from Resource<RecipePage.CustomData> and thus gained a strongly-typed handle on its custom data.
The main reason we didn’t do this was that it would have required us to introduce an IResource or a ResourceBase for all those scenarios where we didn’t care about the resource type – the vast majority of all resource-based code. In retrospect, this would probably be a good idea. It was just too risky to implement at that stage of the project. Still, it’s no more than a small blemish – the typecasts really don’t affect your everyday programming at all, and modifying the existing resource types is pretty uncommon.
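For the record, here’s roughly what that considered design would have looked like – an illustrative sketch, not shipped code:

using System;

// Hypothetical sketch of the generic alternative we considered.
public abstract class Resource<TCustomData> where TCustomData : ResourceData
{
    public virtual TCustomData CustomResourceData { get; set; }
}

public class RecipePage : Resource<RecipePage.CustomData>
{
    public class CustomData : ResourceData
    {
        public Guid RecipeId { get; set; }
    }

    // No typecast needed – CustomResourceData is strongly typed.
    public virtual Guid RecipeId
    {
        get { return CustomResourceData.RecipeId; }
        set { CustomResourceData.RecipeId = value; }
    }
}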
Subclass mapping and other fun
To load the resources this way, we then had to implement quite a few NHibernate mappings. We had a mapping for each of the resource type classes:
public class ProductPageMap : ResourceSubclassMap<ProductPage, ProductPage.CustomData>
{
    public ProductPageMap() : base("Product") { }
}
The parts that had to be changed in each of the mappings were the mapping class name (not interpreted, just a convention), the resource class type argument and the custom data type argument. To support such a (relatively) terse declaration, we had implemented the ResourceSubclassMap helper:
public class ResourceSubclassMap<TResource, TCustomDataType> : SubclassMap<TResource>
    where TResource : Resource
    where TCustomDataType : ResourceData, IEquatable<TCustomDataType>, new()
{
    public ResourceSubclassMap(string discriminatorValue)
    {
        Extends(typeof(Resource));
        DiscriminatorValue(discriminatorValue);
        Map(r => r.CustomResourceData)
            .Column("data")
            .CustomType(typeof(ResourceDataMapper<TCustomDataType>));
    }
}
To sum it up, wiring up mappings of this kind was actually very simple. Also, due to the nice extensibility of FluentNHibernate's mapping syntax, we could even easily throw in some calculated fields. For example, since we commonly accessed the RecipeId property of the RecipePage class (a member stored in the data xml for that resource type), we declared a public Guid RecipeId and specified a mapping like this:
public class RecipePageMap : ResourceSubclassMap<RecipePage, RecipePage.CustomData>
{
    public RecipePageMap() : base("Recipe")
    {
        Map(ap => ap.RecipeId)
            .Formula("cast(data.query('/data/recipeId/node()') as nvarchar(max))")
            .ReadOnly();
    }
}
Why bother, you ask? Since our RecipePage objects have a strongly typed notion of RecipePage.CustomData, why not just access that?
This is one of the places where we can actually deliver some performance improvements by tweaking the query model. Defining the formula of the XML access – the data.query syntax is XQuery that gets executed inside SQL Server – enables us to run NHibernate HQL queries on that particular XML data fragment. A condition of “RecipeId = foo” gets translated into SQL, and SQL Server does a good job of optimizing the XML field queries.
Without this trick, we would have to load all the RecipePages into memory and filter the list there; of course, for most scenarios they’re all cached anyway. Still, using the formula enabled us to skip caching in scenarios that would otherwise have been far too slow, while still getting real-time data with no cache staleness surprises.
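For example, a query along these lines (a hypothetical snippet; session is an open NHibernate ISession) pushes the whole condition, XQuery included, down to SQL Server:

// The RecipeId condition is translated into SQL against the formula,
// so we never hydrate the other RecipePages.
var page = session
    .CreateQuery("from RecipePage p where p.RecipeId = :recipeId")
    .SetGuid("recipeId", recipeId)
    .UniqueResult<RecipePage>();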
ResourceDataMapper<T>, then?
And finally, it all boils down to ResourceDataMapper<T>, referenced in the ResourceSubclassMap constructor. How does that construct the data objects? Well, that’s easy, because we simplified quite a few things. ResourceDataMapper<T> is essentially an NHibernate IUserType implementation, meaning that it has a converter from the database format (in this case, xml) to the complex user type (in this case, the CustomData type). Skipping some boilerplate, here’s the meat:
public class ResourceDataMapper<T> : XDocumentCompositeField<T>
    where T : ResourceData, IEquatable<T>, new()
{
    public override T XmlToComposite(XDocument data)
    {
        var result = new T
        {
            RedirectUrl = data.XPathSelectElement("/data/redirectUrl").NullSafeSelect(e => e.Value),
            AvailableWidgetZones = WidgetConfiguration.GetZones(data.XPathSelectElement("/data/zones")),
            WidgetConfigurations = WidgetConfiguration.ParseWidgetConfigurations(data.XPathSelectElement("/data/zones"))
        };
        result.ParseXml(data.XPathSelectElement("data"));
        return result;
    }

    // (XML serialization and the rest of the boilerplate omitted.)
}
The most important things here are:
- ResourceDataMapper takes a type argument – the custom data type – and requires that it’s constructible (the new() constraint) and that it inherits from ResourceData. It also requires IEquatable<T>, but that’s just to satisfy the C# compiler and make sure we’re following the IUserType contract.
- Those assumptions are then relied upon: a new T – for example, a RecipePage.CustomData – is constructed. Next, its common fields, i.e. the ones in every ResourceData object such as the widget list and the redirect url, get populated. Finally, the abstract ParseXml method is called.
- And no surprise here, each of the CustomData implementations has a ParseXml implementation of its own. An example follows.
internal override void ParseXml(System.Xml.Linq.XElement xmlSource)
{
    var configuredRecipeId = xmlSource.Descendants("recipeId")
        .FirstOrDefault()
        .NullSafeSelect(n => n.Value.TryParseGuid());
    if (configuredRecipeId.HasValue)
    {
        RecipeId = configuredRecipeId.Value;
    }
}
We also have all of this in reverse, i.e. each CustomData has a SerializeToXml() method that allows us to save the changes in the objects. A ResourceDataMapper then calls this method and NHibernate gets its precious XML to be stored in the database. Not particularly complicated once you have it written up!
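For RecipePage.CustomData, the reverse direction might look something like this – a hypothetical sketch, since the actual signature wasn’t shown above:

// Hypothetical counterpart to ParseXml.
internal override void SerializeToXml(XElement target)
{
    // Write the type-specific fragment under <data>; the common parts
    // (zones, redirect url) are handled by the shared ResourceData code.
    target.Add(new XElement("recipeId", RecipeId));
}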
Widgets come next
I already made a passing mention of widgets getting parsed from the XML above. Yep, they do get pulled out by the ResourceDataMapper. But after that, they are brought to life and rendered. In the next episode, I’ll cover the widgetry. It’ll be just another example of the same paradigm we’re using here, but with a bit more actual logic (rendering, editors and whatnot). Until next time!
January 10, 2012
· Jouni Heikniemi · 3 Comments
Tags: data access, NHibernate, valio.fi · Posted in: .NET, Web
Upcoming: SANKO event on ADM code modeling & generation
The Finnish .NET User Group SANKO hasn’t been particularly active lately, largely due to the busy schedules of potential speakers. But we’re back on a roll: on December 14th, from 15:00 to 17:00 Finnish time, we’ll be having a session on an application modeling methodology called ADM.
ADM is the brainchild of Finnish senior developer Kalle Launiala, who currently works as the CTO of Citrus Oy. He has been blogging about ADM extensively at abstractiondev.wordpress.com. In the SANKO session, Kalle will take the stage to define ADM, explain his ideas, cover ADM’s use of Visual Studio’s T4 template mechanism and discuss its impact on software development. As usual, there will be time allocated for discussion.
The event takes place in Microsoft Finland’s auditorium in Espoo. The presentations and the discussion will be in Finnish. If you’re interested, register here with the invitation code “E023B1”. Welcome!
Oh, and of course you can also follow SANKO on Facebook or join our LinkedIn group.
November 24, 2011
· Jouni Heikniemi · No Comments
Tags: code generation, SANKO · Posted in: Misc. programming
Valio.fi deep dive #7: The resource data storage model
After the previous post on how our CMS works on a concept level, it’s time to explain the technical details. Note that remembering the previous post’s concepts is pretty much mandatory for understanding this discussion. Oh, and this gets quite detailed; unless you’re designing a content management platform, you probably don’t care enough to read it all. That’s ok. :-)
The resource table
So, this is our Resource table in its entirety (at one stage of the development, but recent enough for this discussion). Many of the fields are not really interesting. In particular, there are a bunch of fields concerning ratings, view counts, likes and comments, which are cached (technically trigger-updated) summaries of other tables storing the actual user-level interaction. Others are simple mechanics like creation/modification data and publication control.
The interesting bits are:
- Url is what maps a given incoming request to a resource. A typical example is “/reseptit/appelsiinipasha”, a reference to a particular recipe.
- Type is a string identifying the resource type, e.g. “Recipe” or “Article”.
- Layout is a string that describes a specific layout version to use. Typically null, but for campaign articles and other special content, we could switch to an alternate set of MVC views with this.
- Data is an XML field that contains quite a lot of information. We’ll get back to this later.
This is the key structure for managing resources. There’s very little else. One more table deserves to be mentioned at this stage: ResourceLink. A resource link specifies (unsurprisingly) a link between two resources. The exact semantics of that link vary, as represented by the html-ish rel column.
Typical rels are “draft” (points from a published resource to its draft), “tag” (points from a resource to its tags – yes, tags are resources too) and “seeAlso” (a manually created link between two related resources).
Modeling the resource types
The resource types – Recipe, Article, Product, Home, … – are a key part of the equation, because the resource type specifies the controller used for page execution. It also specifies the view, although we can make exceptions using the Layout property discussed above.
Storage-wise, how do these resource types differ? Actually, the differences are rather small. They all have very different controllers and views, but the actual data structures are remarkably similar. For example, a product resource has a reference to an actual Product business object (stored within a Product table and a few dozen subtables), but that’s only one guid. Some resource types have a handful of custom fields, but most only have one or two.
We originally considered a table-per-subclass strategy, but that would have yielded a ridiculous number of tables for a very simple purpose. Thus we decided to go for a single table for all resources, with the Type column as the discriminator. However, the option of dumping all the subclass-specific columns as nullables on the Resource would have yielded a very ugly table (consider what we have now + 50-something fields more).
XML to the rescue
Enter the Data xml column. “Ugh”, you say, “you guys actually discarded referential integrity and hid your foreign keys into an XML blob?”. Yeah! Mind you, we actually even considered a NoSQL implementation, so a relational database with only partial referential integrity was by no means an extreme option.
Let’s compare the arguments for and against the XML column.
Pro-XML | Con-XML (more like pro-tables)
Widgets, the last straw
Ultimately, much of the XML decision finally hinged on the question of where to store widgets. A short recap: A typical page has perhaps a dozen widgets organized in a few zones. The widgets are selected from something like 30 widget types, each of which has a specific set of parameters which must be stored per widget instance.
Now, the widget question is just like the resource one, but on a smaller scale. In order to construct a relational model for the widgets, we would be creating a dazzling amount of tables. For example, if a Product Highlight widget had a specified fixed product to highlight, a pure relational implementation would have at least the following tables:
- Widget (with at least resource id, zone name and a position index, plus a type reference)
- ProductHighlightWidget (with the same identity as the Widget, plus columns for single-value configuration properties)
- ProductHighlightWidget_Product (with a ProductHighlightWidget reference, a Product reference and an index position)
Granted, a delete cascade would work great here except for collapsing the index positions, but even that we could easily handle with a trigger.
But I’m accepting some compromises already: I don’t have a WidgetType table, and my zone name is a string. Relationally speaking, a Resource should actually have a reference to a ResourceLayout (which technically defines available zones), which should then be linked onward to ResourceType (which defines the layout set available). Oh, and we’d still need a ResourceLayout_Zone to which Widget would link, but since the zone set is actually defined in cshtml files, who would be responsible for updating the RL_Z table?
The previous thought experiment reveals some ways in which many applications could benefit from the use of NoSQL solutions. We only touched on one property, and those widget types contain quite a few of them.
As it was obvious that we needed XML to store the widget setup, it became quite tempting to use the same XML blob for resource subclass data as well.
After the widgets discussion, there is one more thing I want to highlight as a benefit of XML. Since most of the page’s content is defined in that one blob, certain scenarios such as version control become trivial. For example, publishing and reverting drafts mostly involves throwing XML blobs around. Compare this to the effort it would take to update all the references properly.
Finally, you may want to ask “Why XML instead of, say, JSON?”. XML admittedly produces relatively bulky documents. However, it has one excellent characteristic: we can easily query it with SQL Server, and that makes a huge difference in many scenarios. Implementing the equivalent performance with JSON would have required cache fields, reliable updating of which would in turn require triggers, but since parsing JSON with T-SQL is a pain (to say the least), it would have drawn SQLCLR in as well. Thus, XML was actually simple this time.
Now show me the XML!
The actual size of the XML varies heavily by resource. On one of my dev databases, the shortest data document is 7 bytes; the longest is 28 kilobytes. But here’s a fairly short one:
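Roughly along these lines – an illustrative reconstruction, with the element names inferred from the mapping code in deep dive #8, so treat the exact shape as a guess:

<!-- illustrative reconstruction, not the actual document -->
<data>
  <recipeId>6f9619ff-8b86-d011-b42d-00cf4fc964ff</recipeId>
  <zones>
    <zone name="Additional" />
  </zones>
</data>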
This is for a recipe page, where the only real property is the recipe reference. The page template also specifies a zone called “Additional”, but it has no widgets specified – thus, a very short XML document.
Here’s a snippet of a significantly longer one.
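Again, an illustrative reconstruction rather than the verbatim document – the widget type and attribute names below are guesses:

<!-- illustrative reconstruction, not the actual document -->
<data>
  <subtype>productArticle</subtype>
  <zones>
    <zone name="Content">
      <widget type="ImageCarousel">…</widget>
      <widget type="Text">…</widget>
    </zone>
    <zone name="Right">
      <widget type="BrandBanner" layout="Narrow">…</widget>
    </zone>
  </zones>
</data>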
This is from an article page. As you can see, the Article resource type has a subtype field declaring this instance as a “product article”, i.e. one that describes a single Valio product or a family thereof. Since an article does not derive its content from a linked business entity, its content is mostly located in widgets. Therefore, the <zones> element tends to be fairly long with a hefty stack of various widgets. In this example, you can see an image carousel, a text element and a brand banner (with a specific layout set – yeah, we have those for widgets too).
After the data comes the code
When I initially set out to write the description of our CMS features, I was thinking of a single long post. I have now rambled on for three medium-sized ones, and I still haven’t shown you a line of code. But, I promise that will change in the next episode: I will wrap up the CMS story by discussing our NHibernate implementation of the above data structures. And I’ll go through the controller/widget rendering framework as well.
Meanwhile, if there are some questions you’d like to have answered, please feel free to post comments.
November 19, 2011
· Jouni Heikniemi · No Comments
Tags: CMS, database, valio.fi · Posted in: Web
Valio.fi deep dive #6: Features of our custom CMS
In the last post, I touched on the choice of using a CMS product or writing your platform yourself. We picked the custom platform approach, and this time I’ll tell you what that led to.
What’s in a Content Management System?
Wikipedia defines CMS in a very clumsy and overgeneric way. Let’s not go there. Given my last post’s definitions on applications and sites, I’ll just list a few key tenets of a modern CMS:
- Administrators must be able to produce and publish content.
- The content will consist of text and images, which will be mixed relatively freely.
- The administrators must be able to define a page structure (and the URIs) for the content.
- Typically, the administrators must be able to maintain a navigation hierarchy and cross-linkage (tagging, “similar content” highlighting etc.) between the pages.
- The system must be ready to accept user feedback (comments, likes, whatever).
These are typical features for complex blog engines. Full-blown CMSes often add features like workflows, extensibility frameworks, versioning and whatever.
Dissecting Valio’s content
For Valio, we had two content sources: first, the actual database content derived from Valio systems (recipes, product information) and second, content produced and entered on the site level. Most pages rely mostly on one of the two, but almost all contain pieces of the other type.
Case 1: The Recipe page
Let’s look into a typical recipe page with some annotations first (click for larger size):
There are four distinct regions marked in the image.
First, we have elements from the page template, the actual HTML. This contains everything that is not in a box in the image: There are fragments of template HTML here and there. For us, the template is technically a set of recursively contained MVC views and partial views. They take some structural metadata as a model and thus render the correct elements (including breadcrumb paths, possible highlight elements and so on).
Second, there is the recipe itself. In the Valio case, this is a typical example of “application data” – the recipes are imported from an internal LOB system designed specifically for recipe maintenance, and the actual recipe data has minimal editing tools on the site; the exception is the users’ ability to create and edit their own recipes, but that’s a slightly different scenario – and at any rate, it is a way to modify the business data, not maintain the site content.
Third, there are the recipe links on the right side of the page. These links are definitely a spot for maintenance: content editors are free to customize whatever is shown with each recipe. However, due to the volume of the data, hand-crafted maintenance cannot be the only option. Thus, we generate appropriate links automatically from business data whenever there are no more specific requirements. This is clearly an example of a CMS-style function, although with some business understanding thrown in.
Fourth, there is the commenting feature. This is standard user generated content, and most definitely a CMS-like element.
All in all, recipes are a very app-like thing from a content perspective, although with some CMS elements added. Consider this in the context of your own development work: how would you add features like this (commenting support, link generation, ability to customize the highlights)?
Before looking at our approach, let’s slice down a totally different kind of page.
Case 2: A product article
Articles represent the other extreme from recipes: they are not apps, in the sense that their content is mostly entered on the site.
Now, look at the article page image – a typical, albeit very short, example of its kind. You’ll notice a couple of things: basically, the page has a template (the unboxed parts). Also, it has two boxes, columns – or let’s just call them zones.
Why zones? Well, if you look closer at the article page (click to get a larger picture), you’ll notice that the zones are split by dashed lines. Those dashed lines represent individual fragments of content. We call them widgets.
Now, this sounds an awful lot like a CMS – perhaps even SharePoint with its Web Parts and Web Part Zones. In fact, our CMS functionality is very similar. We have a couple of dozen different widgets, and you can drop and rearrange them into zones. Widgets can also be parameterized – some only trivially, others extensively.
Let’s quickly run over the widgets on the article to the right. On the main content zone, the first one is a picture widget. You can simply attach an image or several to it, and create either a static image or a gallery. The introduction text is just a text widget (very much a DHTML editor on the administrative end), but set to use a layout that renders the default text with the intro font.
Below that, there’s a short run of text – it’s another text widget, this time with another layout. And further down, there’s yet another widget: a recipe highlight. This one shows a specific, predefined recipe with a largish image and the key ingredients. The layout for the recipe highlight has been defined for the site, but the data is pure business data, not designed or edited for the site.
On the right-hand side, there’s a Link widget (the link to the “Piimät” category), an Article Highlight widget set to show three highlights – some of them may be editor-customized, while the rest are automatically filled by metadata-driven searches. Then there’s another recipe highlight, but with a very different layout from the one in the main zone. And finally, there’s a three-item Product Highlight widget.
The building blocks finally take shape
After explaining this all, let’s look at the big picture. Our key concept is a resource – you might more easily grasp it as a page. Each resource has a URI and some content.
Ok, but how is that content laid out? Each resource also has a type, and we have a few dozen of them. The most understandable ones are those like Recipe, ThemeArticle, Product and FrontPage. Each of these types defines a template, which consists of two things: an HTML template that defines the raw markup plus the available zones, and a default set of content (prepopulated widgets in zones etc.). In addition to the template, the resource type also defines the code needed to execute a page – in practice, a controller.
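In code terms, you could think of a resource type roughly like this – an illustrative sketch with hypothetical member names, not our actual class:

using System;
using System.Collections.Generic;

// Hypothetical sketch – the real implementation differs.
public class ResourceType
{
    public string Name { get; set; }                 // e.g. "Recipe", "ThemeArticle", "Product"
    public Type ControllerType { get; set; }         // the controller that executes the page
    public string TemplateView { get; set; }         // MVC view defining the markup and the zones
    public IList<WidgetConfiguration> DefaultContent { get; set; } // prepopulated widgets per zone
}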
A resource template contains a set of widgets for a typical layout scenario, but often writers will create additional widgets to flesh out the article: they’ll want to include sidebar elements, perhaps use product galleries, embed video or whatever.
App-like resources such as recipes are different. First of all, these resources are typically born when an integration task creates them. Suppose Valio devises a new recipe for a meat stew. As they enter it into their recipe database (an operational system beyond our control) and the publication time passes, the Recipe resource is automatically spawned. The resource is populated with the reference to the business entity (The New Stew), and recipe data such as ingredients and preparation instructions is properly shown.
But that’s not the end of the story. Even with these relatively self-sufficient app-like pages, the page still has widgets. Although the content editor only has limited influence in the app-driven part of the page, the right columns in particular are open to customization. The templates define these widgets as auto-populating: typically, “find three to five items that match current resource’s metadata”. But using the admin view, the content manager can define a custom search (ignoring the local metadata) or even specify the actual search results themselves. If the admin only wants to specify one thing to highlight, the rest can still be populated through automation.
Auxiliary features
The previously described elements take care of the main content production and editing workflows. Resources enable us to semi-seamlessly arrange together content from varying sources. But there is more to all these resources, and even this list isn’t exhaustive.
At one end, we have user participation and user-driven content production. For the sake of simplicity, let’s split this into two. First, there is the custom recipe editing, which is a huge topic in itself. In a nutshell, the recipe editor creates the business entities (recipes, ingredients etc.), whips up a new Recipe-type resource and links all these things together for display. The second, more approachable part is everything else: the ability to comment, like and vote on things. We record all this data – as well as popularity information such as view counts – per resource, allow moderation and content blocking on a resource level and so on.
Another additional feature provided by resources is the preview toolset. Each resource has a copy of itself that only content managers can see: the draft. In fact, you can’t even edit the published resources – you’ll always edit the draft. And then we have two actions to help further: Publish, which replaces the published version with a copy of the draft, and Revert, which replaces the draft with a copy of the public version.
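Since most of a page’s content lives in one data blob, both operations are mostly about copying blobs around. A hypothetical sketch – the names are invented for illustration:

// Hypothetical sketch – actual names and plumbing differ.
public static class ResourcePublishing
{
    // Publish: the public version becomes a copy of the draft.
    public static void Publish(Resource published, Resource draft)
    {
        published.Data = draft.Data;
    }

    // Revert: the draft becomes a copy of the public version.
    public static void Revert(Resource published, Resource draft)
    {
        draft.Data = published.Data;
    }
}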
Conclusion
As I write it, the features listed sound simple. In fact, they are, and they make up a reasonably usable CMS. That doesn’t, however, indicate triviality of implementation: there were quite a few difficult compromises made during the design process. From a purely technical standpoint, the system is very incomplete in terms of features and tools. From a practical usability and efficiency standpoint, I think we hit a pretty good middle ground: we catered to the key business needs without bloating the codebase (or the budget).
In a further post, I will finally cover the topic that originally drove me to write about the whole CMS design issue: the database implementation of resources. Yes @tparvi, I heard you, I just wanted to make sure I can focus on the technical specifics once we get there :-) Up next: Inheritance hierarchies, custom XML columns with NHibernate, and semi-reflected widget construction. Oh yes.
November 5, 2011
· Jouni Heikniemi · One Comment
Tags: CMS, valio.fi · Posted in: Web
What’s new in .NET Framework 4.5? [poster]
.NET Framework 4.5 had its CTP released at Build, and the RTM is coming next year. The key improvement areas are asynchronous programming, performance and support for Windows 8/WinRT – but worry not, it’s not all about those new thingies.
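To give a taste of the headline feature: with async/await, asynchronous code reads like linear code. A minimal illustrative example (hypothetical program, .NET 4.5 syntax):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    // Downloads a page without blocking the calling thread while waiting.
    static async Task<int> GetPageLengthAsync(string url)
    {
        using (var client = new HttpClient())  // HttpClient is new in .NET 4.5
        {
            string content = await client.GetStringAsync(url);
            return content.Length;
        }
    }

    static void Main()
    {
        Console.WriteLine(GetPageLengthAsync("http://example.org/").Result);
    }
}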
Instead of just listing it all out, here’s a poster you can hang on your wall and explore. The ideal print size is a landscape A3. If you want it all in writing, follow the links at the end of this post. Click on the image for a larger version.
[UPDATE 2011-11-16: I have changed the poster to include changes in F# 3.0.]
[UPDATE 2012-03-07: The poster has been updated for .NET 4.5 Beta release. Also, the poster is being delivered to TechDays Finland 2012 participants – the new updated version is equal to the one available in print.]
More information
Check out these links:
- Build presentation on .NET 4.5
- List of new features in ASP.NET
- MSDN articles on .NET Framework 4.5 changes
If you prefer to have the poster in Finnish, we have published it on the ITpro.fi Software development expert group site.
Any feedback is naturally welcome, and I’ll make a reasonable effort to fix any errors. Enjoy!
October 29, 2011
· Jouni Heikniemi · 38 Comments
Tags: .NET 4.5, infographics · Posted in: .NET
A lesson in problem solving: Never assume the report is correct
A while ago, a colleague of mine reported that our OData services were functioning improperly. I fell for it and started looking for the issue. I never should have. Not that soon.
“The OData services don’t seem to expand entities properly when JSON formatting is used.”
I was like “Huh?”. We had a bunch of OData endpoints powered by Windows Communication Foundation Data Services, and everything had worked fine. Recently, the team using the interfaces switched from the default Atom serialization to JSON in order to cut down data transfer and thus improve performance. And now they’re telling me that entity expansion, a feature very native to OData itself, is dependent on the transport format of the data. Really?
The alarm bells should have been ringing, but they were silent. I found nothing by Googling. Having spent a whole 15 minutes wondering about this, I went on to try it myself. Since WCF only provides JSON output through content negotiation, I had to forge an HTTP header to do this. So I typed:
PS D:\> wget -O- "--header=Accept:application/json" "http://....svc/Products/?$expand=Packages"
And to my surprise, the resulting JSON feed really did not contain the expanded entities. Could it be that .NET had a bug this trivial? Baffled, I was staring at my command line when it suddenly hit me.
Can you, dear reader, spot the error?
The problem is that the expand parameter doesn’t get sent properly. See, I’m crafting the request in PowerShell, and to the shell, $expand looks like a variable reference. It then gets replaced with the value of the variable (undefined), resulting in a request to “http://…svc/Products/?=Packages”. No wonder WCFDS isn’t expanding the entities! Of course, we don’t see this with Atom, since we typically do Atom requests from a browser, which has no notion of variable expansion.
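The fix is trivial once you see it: stop PowerShell from interpolating. Single quotes (or escaping the dollar sign with a backtick) do the trick:

PS D:\> wget -O- "--header=Accept:application/json" 'http://....svc/Products/?$expand=Packages'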
So I ran up to my colleague to verify he wouldn’t be falling victim to the same misconception. He was issuing the request from a bash shell on Mac OS X, but the variable interpolation rules of bash are roughly the same as PowerShell’s, so he was seeing the same issue. So everything actually worked exactly as it should – we were just asking for the wrong thing.
If I had tried removing the --header part from the request, I would instantly have spotted that the expansion didn’t work with Atom either, but I didn’t. Why? Because I was paying too much attention to the problem report’s JSON part, thinking the expansion for Atom worked automatically, and neglecting to check the connection between the two. Next time, I’ll be more analytical.
October 23, 2011
· Jouni Heikniemi · No Comments
Posted in: General