Scott Guthrie moves on to lead the Azure Application Platform team

What could be important enough to interrupt the flow of posts on practical project experience? The reorganization that moves every .NET developer’s favorite übergeek into a new position.

Yes, Scott Guthrie is leaving his post as the head honcho for IIS, ASP.NET, Silverlight and a slew of other products. His new task? Make sure developers love Azure and get a solid development story on Microsoft’s cloud platform.

What does this mean?

  • ScottGu’s blog will probably focus more on things related to cloud development (including private cloud). The division he’ll be working in delivers database services, BizTalk, AppFabric, WCF and workflows.
  • There will have to be new champions for his previous products. Perhaps not in terms of organizational position, but in terms of being the public face for said products. Scott Hanselman and others will probably come to the rescue web-wise.
  • Somebody else will probably keynote MIX12. Then again, PDC11 will probably see an even more empowered ScottGu preaching the Azure religion.

One thing I’m a bit worried about is the drive toward openness that has gathered good momentum in Guthrie’s previous organization. Co-operation with the jQuery team, integration of new open source technologies into the Visual Studio / .NET mix, growing support for releasing previously closed source, a very open and straightforward relationship with the developer scene… All this has been highly valuable.

I’m sure the Gu will take much of the goodness to his new org, but I’m less certain whether the goodness will remain in the old one. At the same time, all this makes sense: ASP.NET and Silverlight are very, very mature compared to the development story on Azure. It’ll be interesting to see his impact over the coming months. Also, given the somewhat inherently closed nature of the cloud, the next steps of the openness story are worth looking forward to.

May 3, 2011 · Jouni Heikniemi · No Comments
Tags: , ,  · Posted in: .NET, Cloud

Valio.fi deep dive #2: Code review policy

In this second episode of my postings on valio.fi, I’ll look deeper into the concept of code reviews in the project.

Pre-post note: No, I’m not following a logical order with these. Rather, I’ll try to answer the questions asked as soon as I can while still maintaining a readable approach to the project details. Front-end posts will be interleaved with back-end magic, and there will be a dose of project management sauce in between. If you want to hear about something, ask for it in the comments.

In a project with numerous developers – in our case a scrum team of nine – the question of spreading code knowledge and ensuring a certain quality bar tends to arise. There are numerous ways to tackle this, but we picked two principal approaches: shared physical working space and code reviews.

Peer reviews vs. architect reviews

There are basically two alternative approaches to picking the reviewer. Either you have a reviewer/architect role (which may be rotating, but often tends to stick), or you can have a system of free peer reviews, i.e. anyone can review anything as long as they feel sufficiently competent about it.

Wanting to emphasize the equality of team members, we naturally chose the peer review model. However, few teams have a totally balanced review graph. Usually some people end up reviewing more code than others. This is partially a function of seniority, but also depends on character, preferences and random factors (such as knowledge of a particularly tricky subject).

Ours wasn’t balanced either. A couple of people ended up taking part in most of the reviews. Still, we found the system worked well: it distributed a lot of knowledge, everyone had their code reviewed by several others, and everyone reviewed somebody’s code.

Review guidelines

As with any project with more than a few people and a duration of months or more, writing down the intended methodology felt wise. Thus, we did have written review guidelines, although they were just that – common sense was allowed to rule over these idealistic methodology statements. Here is a summary of our guidelines with some examples and clarifications in parentheses:

Changes that must be reviewed

  • Changes to commonly used base classes
  • Changes to tool libraries (defined as a group of Visual Studio projects)
  • Changes which alter the architecture (add new layers of abstraction, create new dependencies) or institute new technical vocabulary (e.g. introducing the concept of “distributed cache eviction handlers”)

Changes that are recommended to be reviewed

  • Changes which touch critical, bug-sensitive code paths (O/R mapping internals, request routing, complex logic)
  • Changes which seem incorrect or illogical by the first reading of the code
  • Changes whose unit test coverage is insufficient but hard to improve (I’ll get back to this in a later post)

Rules of play

  • The author of the code makes the first call on whether to request a review. Anyone else in the team may also request a piece of code to be reviewed if they feel it necessary.
  • The author may request the review from one or more people.
  • Anyone may review, but authors should usually pick the one most experienced in the domain of the change.
  • Reviewers can ask for additional reviewers if they so feel.
  • Reviews should be handled in a speedy manner. A typical review request should be completed in a working day.
  • The author must answer any comments or questions raised during the review in a speedy and prudent manner, but it is ultimately the author’s choice to either honor or ignore them. Should significant disagreement arise, the team will solve it together.

    Some notes on the guidelines

    These rules, as written above, were applied at the start of the project. During the final few sprints, we relaxed the guidelines and focused on shipping.

    No, wait, maybe that was what you expected to hear. Actually, we tightened the rules a bit for the final three sprints (six weeks) or so. Mainly, we merged the first two headings; essentially, we started to require reviews on all non-trivial changes to the code base. We also pushed this to our project management system; a sprint backlog item was not marked complete until it had passed review (for whatever definition of “passed”; we still allowed the author to decide whether review comments warranted code changes or not).

    As you can see, we relied a lot on code authors. We allowed everybody to request review from anyone, meaning that we didn’t – nor did the project management – get involved in how much time was spent in reviews. And quite some time was spent, although we didn’t log any specific numbers. Personally, I spent almost half of my time reviewing code towards the end of the project, although admittedly I was one of the most active reviewers.

    Forced vs. recommended: Before or after checkin?

    There are basically two stages when you can do a review. Many open source projects take a fairly strict path: only a select few people can commit or authorize commits to the source control, essentially creating a tight net for reviews. Often, in the largest projects, nobody commits anything without somebody else looking at it first. This makes sense when you cannot know all the code authors. The other alternative is to allow free commits, but review before releases or other milestones.

    We were totally in the post-checkin camp. Everyone could commit anything, and we had no system in place to track if people actually did request reviews (until late in the game when we made it part of the completion criteria).

    If you have the right culture, reviews are free

    So yeah, we had a very author-controlled approach to reviews. But we also had a very open code quality culture, and it was not uncommon to question the validity of somebody’s approach over lunch. The debates could get quite heated, and we often spent a lot of time on issues that didn’t, in the end, have enough weight to warrant such use of time.

    However, it all served a purpose. Our somewhat excessive quality discussions created a culture where writing bad code – or skipping reviews – wasn’t particularly tempting. You never got publicly lambasted for a stupid bug in unreviewed code, but everybody felt better collectively owning code that had passed a review: a bug in well-reviewed code was a team mistake.

    The reviews definitely had a time cost. Their positive impact is very hard to measure, given that we couldn’t know what the number of bugs would have been without the review net. We caught some in the reviews, but probably prevented ten times more by conveying knowledge and setting examples during the review process.

    A couple of weeks after shipping the site, many people kept wondering about the low number of bugs discovered right after the launch. This sentiment, however unmeasurable, is one of the key indicators of our success in terms of quality.

    A perfect success?

    Far from it. First of all, our front-end code wasn’t nearly as meticulously reviewed, for a variety of reasons (although the same attention to quality did apply, it wasn’t as collective). There were definitely scenarios where reviews could have saved us from some front-end issues.

    Then, there was the question of review culture. While we were mostly quite successful with it, we could have been a bit more stringent with the adoption at the early stages. Given that the team was formed from three organizations (with three different review backgrounds), we should have made everything clear from the get-go. It would have been a step away from self-organization, but we would have set a meaningful default and then allowed the team to find its own path. At the very least, review completion should have been part of the “done” criteria from a much earlier point in the project.

    All in all, reviews made a big difference in the project. One more thing we could have done better is follow-up: we should have gone through common review comments more often and intentionally disseminated information about the issues on a weekly basis. We didn’t have a process for that. In a bigger team and on a more intense schedule, this would have been even more important.

    Up next: Tools

    That’s it for the review methodology. The next post will be about the “how” of reviews, including the tools (Crucible). At a later stage, I will return to the topic of reviews by explaining a few of the common issues we found and giving some ideas on how to tackle those issues.

    April 28, 2011 · Jouni Heikniemi · 3 Comments
    Tags: ,  · Posted in: Misc. programming

    Valio.fi deep dive #1: Understanding our JavaScript philosophy

    In this first part of my series of postings on the Valio.fi project, I’ll discuss some design aspects of our use of JavaScript. We outlined these principles in the TechDays presentation, but the guidelines are worth repeating in print as well.

    What drove us to this?

    First, let us reiterate the project background and goals.

    When the concept of the site was born, high-fidelity user interface and visual appeal were high on the priority list. As you can see by browsing the site, particularly recipe search and articles, a hefty amount of work has been done to make the content look good. This includes contributions from artists, copywriters and photographers.

    Once we had all this good-looking content, the challenge moved on to the viewing experience: How can we make it as enjoyable as possible for the user to browse the content? There had to be that certain feeling of abundance, and some of that comes from speed: new content has to be available so fast you think you can never consume it all.

    But it wasn’t just about viewing the content; it was also about entering it. Take the recipe editor, for example. Few home cooks have written down exact instructions on how to prepare the dishes they have invented. Sure, many write down the ingredients, but it’s a real effort to produce a concise set of instructions, in a followable order, on how to reproduce the dish.

    What kind of user interface would provide a reasonable approach to this? A set of text boxes would work, but be very clumsy in terms of reordering entered content. A textarea might be acceptable, but then dividing the instructions into clearly printable steps would involve splitting on line feeds or other semi-technical tricks, likely to mislead the user or produce unwanted results.

    Client-side scripting shouldn’t be your first option

    Both examples above added up to this: We wanted faster search-based browsing, which called for a client-side implementation. And we wanted drag-and-drop rearrangeability for recipe entry, as other options would simply have been too unusable.

    After all the debate, there was a valid case for a JS-driven user interface. It wasn’t our first idea, nor was it an obvious choice to make. But in the end we did make that choice, and again a few more times along the way. Here are some of the aspects that affected our decisions:

    • Why do we want this? (see above)
    • Would it be possible to implement a reasonably usable non-JavaScript version?
    • If yes, would it involve a reasonable amount of work compared to the user benefit?
    • What proportion of users would suffer from the lack of JavaScript support?

    Different cases, different reasons

    I’ll present a few use cases for JavaScript on the site and try to give an idea on the arguments behind the decisions.

    1) Features where JavaScript support is completely optional, just providing a better UI

    Take, for example, logging in to the site. The login function is available in the footer bar – if you don’t have JavaScript, it just acts as a link to a login page with a standard HTML form. If JS is enabled, it renders a “toast” window, allowing you to log in within the context of the page.

    This kind of use of JavaScript is trivial to defend: it harms nobody, so the only real cost is the effort of writing and maintaining it (and testing the non-JS version, which tends to be forgotten). The vast majority of the site’s JavaScript usage falls under this category: little tricks to make the site look better and work smoother.
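
    To illustrate the pattern, here is a minimal sketch of how the server side of such a dual-mode feature can look in ASP.NET MVC 3: the same action serves a full page to scriptless browsers and only the form partial to Ajax requests. The controller, model and view names are assumptions made for this example, not the actual project code.

    using System.Web.Mvc;

    public class AccountController : Controller
    {
        public class LoginModel
        {
            public string ReturnUrl { get; set; }
        }

        // GET /Account/Login
        // Without JavaScript, the footer link navigates here and gets the full login page.
        // With JavaScript, the same URL is requested via Ajax and only the form partial
        // is returned, to be shown inside the "toast" window.
        public ActionResult Login(string returnUrl)
        {
            var model = new LoginModel { ReturnUrl = returnUrl };

            if (Request.IsAjaxRequest())
                return PartialView("_LoginForm", model);

            return View(model);
        }
    }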

    2) Features where a reasonable JavaScriptless implementation is not doable

    The recipe editor is a prime example here. While it would be possible to conceive a UI that would work without JavaScript, it would be a stretch to call it usable. The recipe editor requires a lot of complex data input and fairly sophisticated input guidance. For example, we know the available units of an ingredient only after the ingredient has been selected. The selection list is long enough to make <select> population  quite unwieldy (not to mention the UI of the selection itself!).
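
    As a side note, this is also where a small JSON API earns its keep: once an ingredient is chosen, the editor can ask the server which units are valid for it instead of preloading everything. The sketch below is a simplified, hypothetical endpoint in the spirit of what such an editor needs – the names and the repository interface are illustrative, not the real valio.fi API.

    using System.Collections.Generic;
    using System.Web.Mvc;

    public interface IIngredientRepository
    {
        IEnumerable<string> GetUnitsFor(int ingredientId);
    }

    public class IngredientsController : Controller
    {
        private readonly IIngredientRepository _repository;

        public IngredientsController(IIngredientRepository repository)
        {
            _repository = repository;
        }

        // GET /Ingredients/Units?ingredientId=42
        // Returns e.g. ["g", "dl", "tbsp"] so the editor can populate its unit picker
        // only after an ingredient has been selected.
        public JsonResult Units(int ingredientId)
        {
            var units = _repository.GetUnitsFor(ingredientId);
            return Json(units, JsonRequestBehavior.AllowGet);
        }
    }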

    While there definitely are quite a few users without JavaScript support, what percentage of them would be likely to use JavaScriptless user agents to add recipes? We deemed this portion of users small enough to ignore. Sounds harsh, but it is, ultimately, a business decision.

    Many projects fear these decisions, and thus paralyze their JS efforts: “if we cannot build a complete, perfect graceful degradation story, we can’t use JavaScript at all”.

    3) Features where the JavaScriptless implementation is vastly simpler in terms of features

    Searches are the most prominent example here. We provide a decent search without JavaScript, but it’s far from the visual glory of the JS version. It is not even functionally equivalent. For example, the JS version allows multiple search criteria from the same category (for example, you could search for recipes with meat AND cheese), while the pure HTML version only enables searching by one option in each category.

    Such restrictions were not always technically necessary. We could have created the search interface with lots of multi-select controls or massive checkbox groups, and thus replicated the user experience. However, we often found the simpler option more usable. For example, without JavaScript we could not show real-time match counts for each filter. Thus, the user would essentially be searching blindly, perhaps ending up with zero results more often than desired.

    4) Features where we just didn’t want to bother with the scriptless implementation

    Finally, there are the features which could have been implemented without scripting but were not – usually simply because we didn’t have the time or considered something else more important. Perhaps the most visible example of this was the recipe scaling feature, allowing you to change the number of portions for any recipe.

    Superficially, it’s just two buttons that scale the recipe up or down. Behind the scenes, a whole lot of things happen when scaling. We use a JSON request to get the scaled recipe ingredients from the backend and then render them on the page using jQuery templating.

    We could relatively easily add an HTTP GET based approach to scaling, providing links for +/- and then using query string arguments to convey the portion count. That might be something we look at later on.
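
    For the curious, the shape of the server side can be approximated as follows. The JSON action mirrors what is described above (an Ajax request returning the scaled ingredients for jQuery templating to render), and the second action sketches the GET-based fallback that plain +/- links could point at. Controller, action and type names are assumptions made for the example, and the actual scaling logic is left as a placeholder.

    using System;
    using System.Collections.Generic;
    using System.Web.Mvc;

    public class ScaledIngredient
    {
        public string Name { get; set; }
        public decimal Amount { get; set; }
        public string Unit { get; set; }
    }

    public class RecipeScalingController : Controller
    {
        // Called by the client-side script: returns the ingredients of recipe 'id'
        // scaled to 'portions', ready to be rendered with jQuery templates.
        public JsonResult Scale(int id, int portions)
        {
            return Json(ScaleIngredients(id, portions), JsonRequestBehavior.AllowGet);
        }

        // A possible scriptless fallback: plain links such as
        // /RecipeScaling/Show/123?portions=6 would re-render the whole recipe page.
        public ActionResult Show(int id, int portions)
        {
            return View("Recipe", ScaleIngredients(id, portions));
        }

        // Placeholder for the actual scaling logic (amounts multiplied by the ratio
        // of requested portions to the recipe's default portion count).
        private IEnumerable<ScaledIngredient> ScaleIngredients(int recipeId, int portions)
        {
            throw new NotImplementedException();
        }
    }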

    TANSTAAFL

    All the sweetness from JavaScript doesn’t come without its problems, though. In exchange for bypassing some of the basic HTML form controls, we hit the following issues:

    • Accessibility degradation, ranging in impact from mild (drag and drop without a mouse?) to severe (you just cannot scale a recipe without JavaScript support)
    • More code means more bugs; to provide a JavaScript-based UI with pure HTML fallback often means coding some things almost twice.
    • Increased complexity in terms of testing; it’s very difficult to get all the happy and unhappy paths tested, both for JS and HTML.
    • Increased attack surface from a security standpoint; while JavaScript itself rarely presents a security hazard, the additional APIs must also be checked for security issues, DoS opportunities and whatnot.
    • Considerably higher skill demands for team members; few backend developers understand the requirements for a decent JavaScript API, and while learnable on the go, it is expensive.

    All in all, there are quite a few hurdles that need to be considered. Accessibility is a very relevant and real question, but it’s a business one. We wouldn’t have that problem in an ideal world, but since few projects have the resources to actually deliver fully usable pure-HTML experiences, most will practically have to find a compromise.

    Unacceptable reasons for dissing JavaScript

    While at it, I want to mention a couple of common reasons I hear for not using client-based technology.

    • “We can’t make it work the same for everybody” – probably true, but it’s also a cop-out. Check with your business owner whether that’s really what you need to do.
    • “We can’t make it work the same across all the browsers” – partially true, but if your understanding comes from times before jQuery et al., do refresh yourself. Getting CSS layout to work across browsers is much harder.
    • “JavaScript is not a proper programming language” – this is a colored statement coming from somebody with a limited definition of “proper”, typically involving static typing and class-based inheritance. Rest assured, the expressiveness of JavaScript will not be the limiting factor in your project.
    • “JavaScript doesn’t perform well enough” – True in a very specific sense, but an insufficient reason for complete dismissal of client-side scripting. There are scenarios where scripting performance can kill your site. Large DOM manipulations in particular can be a headache on older browsers. You need to test, test and test. But with some work and skill, you can do a lot even with IE 6 – and most uses of JavaScript will encounter no performance issues whatsoever.

    Conclusion

    I want to end the story with a concluding note: there is a lot of power in JavaScript. As usual, not everything works – or is easy to do. But if you are building a web solution where the user experience matters, you are doing yourself a disservice if you don’t even consider the possibilities provided by client-side programming on the web.

    For a larger project, you would do well to devise a consistent, premeditated decision making framework for evaluating your JavaScript scenarios. Each instance of JavaScript use should provide reasonable benefit to offset the technical cost and possible accessibility issues. With that design-time check in place, client-side scripting can improve your site drastically.

    April 18, 2011 · Jouni Heikniemi · 2 Comments
    Tags: ,  · Posted in: Web

    MIX’11 keynote summary, day 1

    It’s time for MIX again! Microsoft’s number one conference for web and phone enthusiasts kicked off in Las Vegas. The first keynote day covered web development, the second one will zero in on phone thingies. Here are the highlights from the first day’s keynotes.

    Internet Explorer

    Dean Hachamovitch kicks off the keynote – and he’s wearing a “TEN” T-shirt hinting at IE 10.

    • Examples of real world HTML 5 applications: Foursquare Playground, SVG animation, Director’s Cut (a tool you can use to create custom Bon Jovi music videos), World’s Biggest Pacman, …
      • Lots of boasting about IE 9’s performance, but little else. Still, it’s not every day that a high-end HTML site recommends that Firefox users pick up IE 9 (as Foursquare Playground does).
    • IE 9 patterns: a segment of the keynote that focused on celebrating IE9’s developer readiness
      • http://html5labs.interoperabilitybridges.com/ – Microsoft’s web site for prototype implementations of emerging web standards
      • In the future, platform previews will appear every 8–12 weeks instead of the current 8; Microsoft feels this will make for more effective changes and a better ability to hear community feedback.
    • Looking forward: IE10 has been developed for three weeks now
      • Improved CSS3 support (visual effects, columns, grid layouts, …). High speed is still a priority, and heavily demoed.
      • Platform Preview 1 is now available for download
      • IE10 demos were run on an ARM machine with 1 GB of memory, demonstrating the capacity to run (fast) on ARM hardware.
    • PDC 2011 will be held on 13th-16th September in Anaheim, California (the PDC site isn’t up to date yet though).

    Developer tooling and server end

    Scott Guthrie takes the stage, turning the focus back to the server end.

    • Now shipping: ASP.NET MVC 3 Tools Update
      • New project templates, including a “use HTML5” switch, NuGet packages pre-installed
      • Entity Framework 4.1 (including code-first capabilities)
      • New jQuery version 1.5 shipping
      • Modernizr – another open source library shipping with Visual Studio through NuGet. This one makes it easier to use HTML5 and still have a working down-level experience.
      • New scaffolding support: Creating a code-first model also creates controllers, views and whatnot.
      • Demo: Creating a simple administrative interface for a podcast site. The highlight of the demo was definitely building a code-first model, which then resulted in a generated database, controllers and views, including database constraints and values. Not useful everywhere, but certainly helps in quite a few scenarios.
    • WebMatrix demo: Building a front-end. The demo included buying a web site template from TemplateMonster and using it to whip up a quick web site with WebMatrix, including lots of easy helpers for social features, package-based helper installation and other stuff.
    • Orchard (an open-source CMS to which Microsoft is contributing code):
      • Again, returning to WebMatrix, and building a site using Orchard downloaded from the Web Gallery.
      • Version 1.1 now available; the new version is definitely much cleaner than the previous ones.
      • Also heavy demoing of Orchard modules, downloadable and installable pretty much like WordPress plugins. Potential new business opportunity, once all the Microsoft-savvy developers get into WordPress-style development?
    • Windows Azure: Releasing new versions of Access Control Service, Caching and Content Delivery Network, and a feature called Traffic Manager. Not much detail in the keynote though, but look forward to the coming days.
    • Umbraco project founder on stage to discuss the CMS:
      • 10 000 active developers around the world – not bad for an ASP.NET open source project
      • Touting Windows Azure support: Umbraco supports automatic scaling, allowing the admin to specify tolerances. Umbraco will then adjust the number of Azure nodes needed to serve the site properly. Now downloadable: Windows Azure Accelerator for Umbraco

    April 12, 2011 · Jouni Heikniemi · One Comment
    Tags: , , , , , ,  · Posted in: .NET, Web

    Out of the dark

    It’s been a silent couple of months. Offbeat Solutions has been heads down programming, debugging, optimizing – and then, finally shipping. Now it’s time to look back.

    At TechDays Finland 2011 we finally talked about our latest customer project, the www.valio.fi web site (slides available, in Finnish). In case you’re not from around here, Valio is Finland’s leading dairy company and a consumer brand of legendary caliber.

    After the presentation on 31st March, we finally made the site live on Saturday, 2nd April. If you can read Finnish and like food, go check the site out at www.valio.fi!

    What’s inside the box?

    Essentially we – including our partners in crime, Valve Branding and Appelsiini Finland – delivered an ASP.NET MVC 3 application with quite a few business system integrations and a SQL Server database. Woohoo – that’s not really the exciting part. But we truly did deliver something that’s not the usual user (or developer) experience, so what’s different?

    In my opinion, there are six aspects that differentiate the Valio site and the implementation project from most cases developers get sucked into.

    1. User experience. We use modern browser capabilities to make the user experience smoother, faster and visually more appealing. And we do mean so much more than slapping an UpdatePanel here and there.
    2. Platformness. Our task was to implement not just a web site, but also a platform for many future digital marketing endeavors. For this purpose, we expose a whole lot of APIs – and use them ourselves.
    3. Automated testing. Although we are nowhere near full test coverage, our test suite compares favorably with most web projects.
    4. Manageability. We provide decent tooling to manage the site, reducing the need to type custom SQL queries. Yeah, we have PowerShell commandlets.
    5. Scale. Valio.fi isn’t huge, but it’s running on a cluster of web nodes. It has all the challenges of being on a cluster, most of which never occur on a single-node web site.
    6. Project methodology. We had imperfect Scrum and all, but we had successful teamwork with almost 20 people involved daily – and with relatively few communication issues.

    Coming up: More info

    Thanks to the openness of Valio and the other involved parties, we enjoy pretty broad liberty to discuss the technical specifics of the project. Offbeat Solutions wishes to use this opportunity to disseminate information and best practices on building modern web sites with ASP.NET.

    During the coming months, we intend to blog on many of the subjects mentioned above. If you have feedback or want to know something about the project, please do leave a comment!

    April 8, 2011 · Jouni Heikniemi · 6 Comments
    Tags: ,  · Posted in: .NET, Web

    Finnish municipalities list published in OData!

    In an attempt to contribute something back to the community, Offbeat Solutions has published a list of Finnish municipalities, regions, electoral districts and whatnot. And of course, in the modern spirit, using the OData protocol.

    Check out the actual material at http://www.offbeat.fi/kunnat.aspx. The description of the data is only available in Finnish, but the data structures themselves are in English – you can also browse straight into the OData endpoint itself.
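
    If you want to poke at the data programmatically, the feed is plain AtomPub underneath, so a few lines of C# are enough to explore it. The service URL below is a placeholder – see the description page above for the real endpoint address:

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class MunicipalityFeedDemo
    {
        static void Main()
        {
            // Placeholder address; the actual endpoint is linked from www.offbeat.fi/kunnat.aspx.
            var feedUrl = "http://example.org/odata/Municipalities";

            XNamespace atom = "http://www.w3.org/2005/Atom";
            XNamespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

            var feed = XDocument.Load(feedUrl);
            var entries = feed.Descendants(atom + "entry").ToList();
            Console.WriteLine("Entries in the feed: " + entries.Count);

            // Each entry carries its data in the m:properties element; dump the first one
            // to see the available fields.
            foreach (var property in entries.First().Descendants(m + "properties").Elements())
                Console.WriteLine(property.Name.LocalName + " = " + property.Value);
        }
    }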

    In case you can read Finnish and are in need of an OData tutorial, there are three new ones available on the ITpro.fi site:

    Enjoy!

    January 14, 2011 · Jouni Heikniemi · 2 Comments
    Tags:  · Posted in: .NET

    NullReferenceException from NHibernate in a WCF Service

    Don’t you hate it when you get a NullReferenceException? NHibernate 2.1.2 throws one if you have a WCF service that accesses the NHibernate context. If you get one from NHibernate.Context.WebSessionContext.GetMap(), read on…

    It’s really simple but easy to forget – NHibernate’s web session model requires you to have an HttpContext available. The method mentioned above calls HttpContext.Current.Items[…], causing a NullReferenceException when HttpContext.Current returns null. Quite logical, but since you rarely call WebSessionContext directly, it’s also easy to miss.

    The fix is a very straightforward one: make your service run in ASP.NET compatibility mode. As described in the MSDN article on WCF and ASP.NET coexistence, WCF services have their own processing pipeline and therefore a null HttpContext. However, you can coerce your WCF requests into the ASP.NET model by enabling compatibility mode. You do this by tagging your service class with an attribute:

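    // Forces the request through the ASP.NET pipeline so that HttpContext.Current
    // is populated when NHibernate's WebSessionContext asks for it.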
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
    public class MyService : DataService<MyDataSource>

    Once done, your service should execute fine. However, you might also get an InvalidOperationException with the following message:

    The service cannot be activated because it requires ASP.NET compatibility. ASP.NET compatibility is not enabled for this application. Either enable ASP.NET compatibility in web.config or set the AspNetCompatibilityRequirementsAttribute.AspNetCompatibilityRequirementsMode property to a value other than Required.

    This would indicate a need to add the following to your web.config:

    <system.serviceModel>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
    </system.serviceModel>

    Hopefully this will finally get you back on track. :)

    December 21, 2010 · Jouni Heikniemi · One Comment
    Tags: , ,  · Posted in: .NET

    Razor + Visual Studio bug: don’t paste whitespace in code

    Just a quick note on a bug my wife discovered when working with some Razor code.

    If you have the ASP.NET MVC 3 Release Candidate installed, you can trigger this yourself. Take a spot where your view file has a C# expression (such as @View.Title), and paste some code after that expression. An example of such code might be “.Trim(' ')”.

    Pasting that would have you expect a result of @View.Title.Trim(' '), but instead you end up with @View.Title '').Trim('.

    The problem is that pasting a C# segment with whitespace in it causes Visual Studio to mutilate the code strangely. This is particularly apparent when pasting heavy method calls with spaces between arguments. Well, yet another reason to avoid code in your Razor files!

    The Razor team confirmed via email that this is a bug in the Release Candidate and will be fixed in RTM.

    November 30, 2010 · Jouni Heikniemi · No Comments
    Tags:  · Posted in: .NET, Web

    SANKO meeting on ORM tools, 2010-11-24

    The Finnish .NET Users Group SANKO will meet on 24th November in Leppävaara, Espoo, Finland to discuss the concept and details of object-relational mapping. In case you live in the Greater Helsinki Area, check this out!

    The agenda is as follows (all sessions in Finnish):

    • 13:00 – 13:15: Welcome & What’s this SANKO? / Jouni Heikniemi
    • 13:15 – 14:00: Introduction to OR mapping: What and why? / Sami Poimala
    • 14:00 – 15:00: Entity Framework 4 / Pasi Taive
    • 15:00 – 15:15: Break
    • 15:15 – 16:00: ORM under the covers – a deep dive on NHibernate / Lauri Kotilainen
    • 16:00 – 17:00: Panel Discussion: Most common problems & the best practices for solving them

    If you’re interested, register before Monday via the Microsoft site. On behalf of the SANKO team, welcome!

    November 19, 2010 · Jouni Heikniemi · No Comments
    Tags:  · Posted in: .NET

    Outputting partial elements with ASP.NET Razor

    ASP.NET MVC 3 also ships with a totally new view engine option, Razor. I’ve now been using Razor in a project for a couple of months, and yeah, it’s good – but not entirely without problems. Here I’ll cover one of them.

    Imagine a scenario where you’re outputting something that may or may not be a link, depending on the conditions. You’d be tempted to write something like this:

    @if(linkUri != null) { <a href='@linkUri'> }
    Some text
    @if(linkUri != null) { </a> }

    … only to find out that it doesn’t work. The error message you’ll be getting is probably “The if block is missing a closing "}" character.  Make sure you have a matching "}" character for all the "{" characters within this block, and that none of the "}" characters are being interpreted as markup.” Alternatively, you could get “The foreach block is missing a closing “}”…” or something similar, depending on your enclosing elements.

    That’s not really enlightening, and it turns out Razor is smarter than you’d think – and as usual, smart software has a flip side.

    You could just balance yourself…

    Razor is intended to be written with elements fully enclosed within the C# blocks. Therefore, you might rewrite this as:

    @if(linkUri != null) { <a href='@linkUri'>Some text</a> }
    else { <text>Some text</text> }

    However, such an approach repeats “Some text”, which may be a lengthy expression or even an entire block of HTML, making this rather clumsy. Note that you need the <text>…</text> element to ensure that the else block content gets parsed as HTML instead of C# – the text element will not be present in the resulting HTML, and would be unnecessary if you had any actual tags in there.

    Another workaround would be to use the WriteLiteral method and enclose the tags in C# strings, disabling Razor’s parsing logic:

    @if(linkUri != null) { WriteLiteral("<a href='" + linkUri + "'>"); }
    Some text
    @if(linkUri != null) { WriteLiteral("</a>"); }

    … but what you really want is @:

    Of course, this is a common enough scenario to warrant a syntactic helper. It’s called @:, and it means that the rest of the line is markup, no matter what tags occur. This allows the logic to be written as follows:

    @if(linkUri != null) {
      @:<a href='@linkUri'>
    }
    Some text
    @if(linkUri != null) {
      @:</a>
    }

    Unfortunately, as you can see, the @: has a line scope and thus necessitates a few additional line feeds. Despite that, this is rather readable and scales well to complex instances of “Some text”.

    Thanks to Andrew Nurse at the Razor team for helping me out with this!

    November 17, 2010 · Jouni Heikniemi · One Comment
    Tags: ,  · Posted in: .NET, Web