Friday, 12 February 2010

I wanted to write some more sophisticated asserts in the Genome tests, where the exception message is also checked for the error code (e.g. whether the message contains #GENxxx).

In newer versions of NUnit there is an enhanced version of the ExpectedException attribute that can match the exception message in multiple ways. For example, you can use a “contains” match to check that the right error code is mentioned in the exception message.

        [ExpectedException(typeof(GenomeException), ExpectedMessage = "GEN01253", MatchType = MessageMatch.Contains)]

As we were using an old version of NUnit (v2.2.6), I tried upgrading to v2.5.3.9345 to make use of the new features. As soon as I did this, my R# v4.5 failed to recognize the ExpectedException attribute at all.

To make it clear: even if I leave the ExpectedException attributes exactly as they are (without new parameters like MatchType), the test runner does not recognize them, so after the NUnit upgrade all existing tests using the ExpectedException attribute fail in the ReSharper test runner.

As you can see, this is a major problem with R# that can be a show stopper (either for upgrading NUnit or for using R# to execute the tests). It turns out, however, that you can upgrade to R# v4.5.2, which supports the ExpectedException attribute of the newer NUnit versions *again*. I found the info here: http://www.jetbrains.net/devnet/thread/281286

After installing the new version, it seems to work again. It also turns out, however, that the new parameterization with MatchType is still not supported: trying the sample above, the runner ignores the MatchType and claims that the message is not the same as specified. So you just get back the "basic" functionality of ExpectedException. JetBrains “refused” to support this properly in the v4 branch and promises improvements in v5.0 only: http://www.jetbrains.net/jira/browse/RSRP-43833

Get rid of the ExpectedException for the future anyway?

I'm not sure if this is common sense already, but there is a new way to expect exceptions in NUnit: the Assert.Throws method, to which you pass the code that is supposed to throw the exception as a delegate. This way you can explicitly mark *where* you expect the exception (with the attribute, this is not visible).

Also, you get back the exception in a strongly typed way, so you can write normal asserts about it. E.g. I can assert on the ErrorCode of the GenomeException, which is much cleaner than a String.Contains expressed through the ExpectedException attribute.

Another benefit (considering the upgrade issues above) is that you are more independent of the running environment, because the assertion is expressed as normal C# code; the runner does not need any “extra knowledge”, unlike in the case of a special attribute.

[Test]
public void CreateObjectWithDefaultDiscriminatorReference()
{
    ...
    var exception = Assert.Throws<GenomeException>(
        () =>
        {
            Context.Current.Flush();
        });

    Assert.AreEqual("GEN0153", exception.ErrorCode);
    Assert.That(exception.Message, Is.StringContaining("type discriminator"));
    Assert.That(exception.Message, Is.StringContaining("not set"));
}

This is not a reason to rework all the existing tests, but I'd suggest using this new technique in the future.

Friday, 12 February 2010 16:09:16 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
 Friday, 05 September 2008
When writing our new messaging framework (GMX) for Genome v4, I ran into an interesting problem with LINQ. My colleague Sztupi also ran into the same problem at almost the same time, so I thought it would make sense to write about it.

Before describing the problem, let me summarize some not-so-well-known facts about LINQ. If you are experienced with LINQ and the expression trees it uses, you can skip this part and proceed from the “So much for the LINQ overview” sentence to read about the problem I ran into.

When you write a query, such as

from c in customers
where c.City == "London"
select c

the C# compiler compiles it into a method call like this:

customers.Where(c => c.City == "London")

You can also write the Where() call directly; you don’t have to use the "from…" syntax. The parameter of the Where() call is a special construct called a lambda expression, which is something very similar to an anonymous method. In fact, sometimes it is an anonymous method.

Now the question is what you want to do with this lambda expression. If you want to filter customers that are already loaded into the memory, you want to have an anonymous method compiled from the lambda. However, if the customers reside in the database or in an XML file, you actually never want to evaluate the lambda as a .NET method call, but rather you want to transform it to SQL or XPath and let the underlying engine execute it. In this case, the anonymous method is not a good option, as it would be very hard to find out from the compiled CLR code that the method wanted to compare the City field to "London".

And here comes the big trick of LINQ. The C# compiler decides during compile time whether to compile the lambda expression to an anonymous method, or to an expression tree initialization code. If it compiles it to expression tree initialization, then during runtime, a new expression tree will be created whenever this Where() method is called, and this expression tree will represent the lambda expression you just described. O/RM engines like Genome can take this expression tree and transform it to SQL.

The only question that remains is how the C# compiler decides whether to compile the lambda to an anonymous method or to expression tree initialization. The decision is made by analyzing the parameter types of the Where() method you are actually about to call: if the Where() method takes a delegate parameter, the lambda is compiled to an anonymous method; if it takes an Expression<T> parameter, it is compiled to expression initialization.
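To see this in plain C# (a minimal sketch; a Customer class with a City field is assumed from the query above), the very same lambda text compiles to two completely different things depending on the target type:

using System;
using System.Linq.Expressions;

class Customer { public string City; } // assumed from the query example

class LambdaDemo
{
    static void Main()
    {
        // Target type is a delegate: the lambda is compiled to an
        // anonymous method (IL) at compile time.
        Func<Customer, bool> asDelegate = c => c.City == "London";

        // Target type is Expression<T>: the lambda is compiled to code
        // that builds an expression tree at runtime.
        Expression<Func<Customer, bool>> asTree = c => c.City == "London";

        Console.WriteLine(asDelegate(new Customer { City = "London" })); // True
        Console.WriteLine(asTree.Body);                                  // (c.City == "London")
    }
}

This is exactly the difference between Enumerable.Where (which takes a Func<T, bool>) and Queryable.Where (which takes an Expression<Func<T, bool>>).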

It is good to know that the LambdaExpression class has a Compile() method that can be used to compile the expression tree to a delegate. There is no transformation in the other direction, however, so you cannot get an expression tree from a delegate.
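For example (again assuming the Customer class from above):

Expression<Func<Customer, bool>> filter = c => c.City == "London";

// Expression tree -> delegate: supported via Compile().
Func<Customer, bool> isInLondon = filter.Compile();
bool result = isInLondon(new Customer { City = "London" }); // true

// Delegate -> expression tree: there is no built-in way back.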

Genome | Linq
Friday, 05 September 2008 16:16:03 (W. Europe Daylight Time, UTC+02:00)  #    Disclaimer  |  Comments [0]  | 
 Tuesday, 05 August 2008

We frequently get asked about Genome’s future in the light of Microsoft’s upcoming .NET 3.5 SP1 release, which includes the Entity Framework and related technologies such as LINQ and ADO.NET Data Services (see also the beta release announcement on Scott Guthrie's blog giving a broad overview about the new features).

LINQ

LINQ (already released with .NET 3.5) provides query language capabilities for C# and VB.NET. Many new Microsoft technologies and products by other vendors rely on LINQ, so integrating with it is crucial for staying connected with other technology trends. LINQ needs to be distinguished from LINQ2SQL, as many users confuse the two.
Genome has been fully integrated with LINQ since November 2007 (although we released several preview integration versions from 2006 on).  In fact, Genome was the first third party O/RM to provide LINQ integration. Developers who use Genome are thus not locked out of technology trends related to LINQ.

Astoria

Astoria is the code name for Microsoft ADO.NET data services. It provides a REST interface for any data source that supports the interfaces IQueryable (introduced with LINQ) and IUpdateable (introduced with Astoria). It is not an O/RM, but rather a messaging layer over O/RMs or other data sources.
Astoria’s current release focuses on integrating with Entity Framework, but it appears that its extensibility is still unstable when it comes to other frameworks. Astoria is a great concept, but we doubt anyone is currently using it in production.
We are confident that Genome will support Astoria in the near future (before the end of this year), when integration possibilities have matured and the integration issues on Astoria’s side have been resolved. As with LINQ, developers who use Genome are not hindered from using this technology.

Entity Framework (EF)

Entity Framework actually consists of three major modules:

  • Entity Data Model (EDM): this is an abstraction of a relational model that introduces higher-level concepts such as inheritance, composition and associations. Any database ER model can be mapped to an EDM. It also provides a vendor-neutral dialect of SQL (eSQL). Developers can map their databases to EDM and formulate queries against them in eSQL. EDM exposes “entities”, which are not CLR classes but rather structured data rows with metadata attached.
  • Provider Model: this is an extensibility point of Entity Framework for database vendors, to allow them to adapt eSQL and the EDM (data types, etc.) to vendor-specific database models (vendor SQL and database type systems).
  • LINQ To Entities: this is an object-relational mapping tool that allows CLR class models to be mapped to an EDM. In other words, it maps CLR classes to EF entities.

Genome actually overlaps with LINQ To Entities to a certain degree. Entity Framework itself is much more than an O/RM, as it represents the next level of abstraction for data access on the .NET platform (hence its original name, ADO.vNext). If Entity Framework proves to be useful and is widely adopted by our target customers, we can imagine integrating Genome with Entity Framework by replacing LINQ to Entities and allowing CLR business models to be mapped to EDMs with Genome. This would help our customers benefit from the Genome O/RM API and utilise EDM for other applications such as reporting, etc.

Our main concerns about Entity Framework and Genome’s value proposition:

Technical Overkill

There is the potential that the development model and abstraction required by Entity Framework are overkill for certain applications (e.g. there are three models, and all mappings between them need to be managed).

Tools provided by Entity Framework heavily depend on visual designers integrated in Visual Studio to manage the various mapping models and generate code from them. This is especially the case with large and complex projects that involve large and complex models – which is what Entity Framework seems to target. We strongly doubt that relying on visual designers to that extent is a good approach. For example, resolving a merge conflict in the model (as can easily occur in projects with large teams) is not possible with a graphical designer, forcing developers to edit the models manually.

Version 1 issues

Of course, any first version of a product has some immaturity issues that people usually have to work around more or less. However, since Entity Framework introduces a radically new and very complex concept for abstracting data access, the functional completeness of version 1 is very low compared to what the concept itself covers. The danger of encountering issues that are difficult or impossible to resolve is quite high in version 1. This can be a particular problem in large enterprise projects, which is of course what Entity Framework appears to target.

The bottom line

The funny thing is that while LINQ2SQL is too simple for many applications, Entity Framework seems to be far too complex for many of our cases.

We are going to continue polishing Genome into an O/RM that is sophisticated enough to serve complex enterprise projects while also remaining simple enough not to force over-engineering. We are just about to release Genome V4. Working on O/RM for .NET since 2002 has given us quite a lot of confidence in our approach: we balance flexibility and simplicity. We ensure that our customers are not locked out of technology trends on the .NET platform, so we will continue to integrate Genome with new technology concepts introduced by Microsoft in this field. We hope that our position as the first 3rd party O/RM to integrate with LINQ has already proven our commitment to this strategy.

Tuesday, 05 August 2008 17:18:05 (W. Europe Daylight Time, UTC+02:00)  #    Disclaimer  |  Comments [0]  | 
 Tuesday, 01 July 2008

Microsoft has released the .NET Framework 3.5 Service Pack 1 beta. For all Genome users: be aware that this package rather quietly also contains an SP2 beta for the .NET 2.0 Framework, which cannot be deployed on its own. There is not much information available on its exact contents (see http://readcommit.blogspot.com/2008/05/microsoft-net-framework-20-service-pack.html).

Be warned that a Genome schema compiled with the SP beta may not load on a machine without the SP beta (as is usual for production servers), yielding the following error:

SerializationException: The object with ID 40221 implements the IObjectReference interface for which all dependencies cannot be resolved. The likely cause is two instances of IObjectReference that have a mutual dependency on each other.

We are sorry about any inconvenience caused. This issue may occur with Genome V3.3.4.38 and we are currently investigating if others are affected. We are providing feedback on this issue to Microsoft and hope that it will be resolved with the release - in the meantime, please make sure that deployment packages are generated on a machine that does not have SP1 beta installed!

Tuesday, 01 July 2008 11:30:48 (W. Europe Daylight Time, UTC+02:00)  #    Disclaimer  |  Comments [0]  | 
 Friday, 27 June 2008

Many posts (e.g. http://blog.deploymentengineering.com/2007/06/dealing-with-very-large-number-of-files.html, http://www.wintellect.com/cs/blogs/jrobbins/archive/2007/10/19/wix-the-pain-of-wix-part-2-of-3.aspx and http://blog.deploymentengineering.com/2007/06/burning-in-hell-with-heat.html) have been written about the problem of adding a large number of files to a WIX installer. The problem is most painful when you want to add content files that do not really have any special purpose but just have to be there (e.g. code samples or source code packages).

I also struggled with this problem, and finally found myself creating a small MsBuild tool (WixFolderInclude.targets) that you can include in your WIX project to generate file installers for entire folders on the disk. :-) I call it a tool, as I don’t have a better name for it: it is not (only) an MsBuild target, nor is it a task. It is actually a WIX MsBuild extension, but WIX already has the term “WIX extension”, which means something else. So let’s stick with “tool”.

The WixFolderInclude tool

Let’s see how you can use this tool. It was tested with the latest WIX framework (v3.0.4220), but it probably works with older v3.x versions as well. I’m assuming that you are more or less familiar with the WIX and MsBuild concepts. If not, you can quickly grab the necessary information from Gábor Deák Jahn's WiX Tutorial and the MSDN docs.

WIX projects (.wixproj) are MsBuild projects, and you can extend them with additional MsBuild property definitions or targets. One option is to modify the wixproj file in a text editor… This is fine, but I like to open the WIX project in Visual Studio, and in that case modifying the project file is not easy. Instead, I usually start by creating a “Build.properties” file in the WIX project (of type “Content”, so it does not affect the WIX compilation), where I can write my MsBuild extensions. I have to modify the project file only once, to include the Build.properties file. I usually include it directly before the wix.targets import:

</ItemGroup>
<Import Project="$(MSBuildProjectDirectory)\Build.properties" />
<Import Project="$(WixTargetsPath)" />

But you can directly write into the project file as well, if you don’t use the VS integration.

Let’s take a very simple example: I would like to include two code samples in the installer. They are located in some folder (C:\Temp\ConsoleApplication1 and C:\Temp\WebApplication1) and I would like to install them in a “Samples” folder inside the program files entry of my installed application. Of course both samples contain sub-folders that I also want to include.

To achieve that with my tool,

  • you have to define MsBuild project items that describe how these folders should be installed
  • you have to define some properties to wire in the tool
  • during compilation, the tool generates temporary WIX fragment files (and includes them in the compilation) containing the Directory/Component/File structure and a component group that gathers the components generated for the files in the directory structure
  • you have to reference the generated component groups in the installation features of your choice in the normal wxs files (e.g. Product.wxs)

So first, let’s create the folder descriptions for my sample. The tool searches for project items called “WixFolderInclude”, so we have to create such items for the folders we want to include:

<ItemGroup>
  <ConsoleApplication1Files Include="C:\temp\ConsoleApplication1\**" />
  <WixFolderInclude Include="ConsoleApplication1Folder">
    <SourceFiles>@(ConsoleApplication1Files)</SourceFiles>
    <RootPath>C:\temp\ConsoleApplication1</RootPath>
    <ParentDirectoryRef>Dir_Samples</ParentDirectoryRef>
  </WixFolderInclude>
</ItemGroup>

<ItemGroup>
  <WebApplication1Files Include="C:\temp\WebApplication1\**" Exclude="C:\temp\WebApplication1\**\*.scc" />
  <WixFolderInclude Include="WebApplication1Folder">
    <SourceFiles>@(WebApplication1Files)</SourceFiles>
    <RootPath>C:\temp\WebApplication1</RootPath>
    <ParentDirectoryRef>Dir_Samples</ParentDirectoryRef>
  </WixFolderInclude>
</ItemGroup>

As you can see, you can define the set of files to be included using the standard possibilities of MsBuild, so you can include deep folder structures, exclude files, or even list the files one by one. In the example here I have excluded the source control info files (*.scc) from the second sample.

In the WixFolderInclude items, note the following things:

  • The main entry (ConsoleApplication1Folder and WebApplication1Folder) describes the name of the folder installation. The generated component group ID will be based on this name, so you can use any meaningful name here, not necessarily the folder name.
  • The “SourceFiles” metadata should contain the files to be included in this set (unfortunately, you cannot use wildcards here directly, so you have to create a separate item for them).
  • The “RootPath” metadata contains the folder root of the folder set to be included in the installer. This could also be derived from the source file set (by taking the common root folder), but I like to have it more explicit, like this.
  • The “ParentDirectoryRef” metadata specifies the ID of the <Directory>, where the folder should be included in the installer. Now I have created a directory (Dir_Samples) for the Samples folder in the program files, so I have specified that as parent.

Now that the definition is ready, the next step is to set up the tool. This is very simple; you just have to include the following lines in the Build.properties (or in the project file):

<Import Project="$(MSBuildProjectDirectory)\Microsoft.Sdc.Tasks\Microsoft.Sdc.Common.tasks" />

<PropertyGroup>
  <CustomAfterWixTargets>$(MSBuildProjectDirectory)\WixFolderInclude.targets</CustomAfterWixTargets>
</PropertyGroup>

The value of CustomAfterWixTargets should point to the tool file. If you have it in the project folder, you can use the setting above directly. Also note that the tool uses the Microsoft.Sdc.Tasks library (http://www.codeplex.com/sdctasks). I have tested it with the latest version (2.1.3071.0), but it might work with older versions as well. The Microsoft.Sdc.Common.tasks file should be imported only once, so if you have already imported it in your project, you can skip that line.

Now we are done with the entries in the Build.properties, so let’s include the folders in the installer itself. As I mentioned, the tool generates fragments that contain a component group for each included folder. The component group is named after the WixFolderInclude item: CG_<name>. In our case, these are CG_ConsoleApplication1Folder and CG_WebApplication1Folder. So let’s include them in the main feature now:

<Product ...>
  ...

  <!-- set up the folder structure -->
  <Directory Id="TARGETDIR" Name="SourceDir">
    <Directory Id="ProgramFilesFolder">
      <Directory Id="INSTALLLOCATION" Name="WixProject1">
        <Directory Id="Dir_Samples" Name="Samples">
        </Directory>
      </Directory>
    </Directory>
  </Directory>

  <!-- include the generated component groups in the main feature -->
  <Feature Id="ProductFeature" Title="WixProject1" Level="1">
    <ComponentGroupRef Id="CG_ConsoleApplication1Folder"/>
    <ComponentGroupRef Id="CG_WebApplication1Folder"/>
  </Feature>
</Product>

And that’s it. We are ready to compile!

Fine tuning

The tool supports some additional configuration options, mainly for debugging purposes: you can specify the folder where the temporary files are stored (by default, the value of the %TMP% environment variable) and whether to keep the temp files (by default, they are deleted after compilation). These settings can be overridden by including the following lines in the Build.properties:

<PropertyGroup>
  <WixFolderIncludeTempDir>C:\Temp</WixFolderIncludeTempDir>
  <WixFolderIncludeKeepTempFiles>true</WixFolderIncludeKeepTempFiles>
</PropertyGroup>

Possible problems

Of course, life is not that easy... so you might encounter problems with this tool as well. One is that it breaks MsBuild’s up-to-date detection, so the project is recompiled even if nothing has changed. I think this could be solved by specifying some smart output tags on the target, but it is not easy, and I usually want to be sure that the installer package is fully recompiled anyway.

The other – probably more painful – problem is that you cannot include additional files from WIX in a subfolder of an included directory. We ran into this when we wanted to create a shortcut to the solution files of the installed samples: since the IDs that the Sdc Fragment task generates are GUIDs, you have no chance of guessing what a subfolder’s ID will be.

I have extended WixFolderInclude.targets to support generating deterministic names for selected folders. The folders can be selected with the “DeterministicFolders” metadata tag of the WixFolderInclude item; the value is a semicolon-separated list of folder names relative to the RootPath. Please note that as these are folders, you cannot really use MsBuild’s wildcard support; you have to type the folder names manually. Let’s suppose we have a Documentation folder inside the ConsoleApplication1 sample that we want to be able to extend from WIX later. We define it as follows:

<ItemGroup>
  <ConsoleApplication1Files Include="C:\temp\ConsoleApplication1\**" />
  <WixFolderInclude Include="ConsoleApplication1Folder">
    <SourceFiles>@(ConsoleApplication1Files)</SourceFiles>
    <RootPath>C:\temp\ConsoleApplication1</RootPath>
    <ParentDirectoryRef>Dir_Samples</ParentDirectoryRef>
    <DeterministicFolders>Documentation</DeterministicFolders>
  </WixFolderInclude>
</ItemGroup>

As a result, the ID of the Documentation folder’s <Directory> element will be Dir_ConsoleApplication1Folder_Documentation, so we can extend it from our Product.wxs:

<DirectoryRef Id="Dir_ConsoleApplication1Folder_Documentation">
  <Component Id="C_AdditionalFile" Guid="5D8142C1-...">
    <File Name="AdditionalFile.txt" Source="C:\Temp\AdditionalFile.txt" />
  </Component>
</DirectoryRef>

Attachment

In the attached ZIP file, you will find the WixFolderInclude.targets file, and also the sample that I have used here to demonstrate the features (without the silly ConsoleApplication1 and WebApplication1 folders). Feel free to use them!

ManyWixFiles.zip (347.55 KB)

Posted by Gáspár

MSBuild | WIX
Friday, 27 June 2008 15:10:30 (W. Europe Daylight Time, UTC+02:00)  #    Disclaimer  |  Comments [2]  | 
 Tuesday, 20 May 2008

I hadn’t touched the topic of web service proxy generation for a long time, but in order to fine-tune our new message contract generation framework for Genome, I had to look into it once more.

My concrete problem is very simple: I want to generate a proxy for a web service, but instead of generating some DTO types based on the wsdl, I would like to use my DTO classes that are already implemented (I know that the wsdl-generated ones are just fine, but mine are a little bit better).

The old solution was to let Visual Studio generate the proxy code, remove the shared types from it, and hope that you don’t have to update the reference too often, because then you have to do it all over again. With web service proxies there seem to be no real improvements: although wsdl.exe has some nice switches, like /sharetypes, you cannot invoke it from the “Add Web Reference” dialog, so you have to complicate the development workflow anyway. I wonder why MS did not implement a backdoor through which I could provide additional wsdl.exe parameters…

The better news is that the WCF client generator can also generate clients for web services. And in the “Add Service Reference” dialog, you can even configure it to reuse types from existing assemblies, if they are referenced in the client project. Super! This is what I wanted. But it does not work :-( … at least not if the service is an ASMX web service (it seems to work fine for WCF services). It still generates my DTO classes.

I have played a lot with it. The problem seems to be that it does not recognize the matching DTO class, because the class is annotated with XML serializer attributes ([XmlType], etc.) and not with WCF attributes. Indeed, if I annotate the class with [DataContract] and [DataMember] attributes, it finds it! However, the client generator has a checking mechanism that verifies whether the reused type matches the wsdl definition, and this is what seems to fail, even if I apply exactly the same attributes that it would generate. I have looked around, and it seems that this checking mechanism might fail even for WCF classes.
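Just to illustrate the annotation switch (a sketch with a made-up DTO; the names and namespace are mine, not from a real service):

using System.Runtime.Serialization;

// The existing DTO was annotated for the XML serializer (e.g. [XmlType])
// and was therefore not recognized for reuse. Annotated with the WCF
// data contract attributes instead, the generator finds it:
[DataContract(Name = "Order", Namespace = "http://example.org/orders")]
public class OrderDto
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Description { get; set; }
}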

This is a trap: a validation mechanism that produces false validation errors and cannot even be switched off. So I’m still exactly where I was 5 years ago: manually removing the generated types from reference.cs.

Posted by Gáspár

Genome | WCF
Tuesday, 20 May 2008 14:32:14 (W. Europe Daylight Time, UTC+02:00)  #    Disclaimer  |  Comments [0]  | 
 Tuesday, 05 February 2008

No, this article does not nag about some code I've seen that misuses new features. This is how I did it - on purpose.

I've always disliked the way I usually set up data in the database for testing: recreate the database, create the domain objects, set all the necessary properties, commit the context. Take this code for example:

DataDomainSchema schema = DataDomainSchema.LoadFrom("SomeMappingFile");
schema.CreateDbSchema(connStr);

DataDomain dd = new DataDomain(schema, connStr);

using (Context.Push(ShortRunningTransactionContext.Create()))
{
  Customer tt = dd.New<Customer>();
  tt.Name = "TechTalk";

  RootProject tt_hk = dd.New<RootProject>();
  tt_hk.Name = "Housekeeping";

  ChildProject tt_hk_hol = dd.New<ChildProject>();
  tt_hk_hol.Name = "Holiday";
  tt_hk.ChildProjects.Add(tt_hk_hol);

  ChildProject tt_hk_ill = dd.New<ChildProject>();
  tt_hk_ill.Name = "Illness";

  tt_hk.ChildProjects.Add(tt_hk_ill);

  tt.RootProjects.Add(tt_hk);

  RootProject tt_g = dd.New<RootProject>();
  tt_g.Name = "Genome";

  ChildProject tt_g_dev = dd.New<ChildProject>();
  tt_g_dev.Name = "Development";
  tt_g.ChildProjects.Add(tt_g_dev);

  ChildProject tt_g_mnt = dd.New<ChildProject>();
  tt_g_mnt.Name = "Maintenance";
  tt_g.ChildProjects.Add(tt_g_mnt);
  tt.RootProjects.Add(tt_g);

  Context.CommitCurrent();
}

What I dislike in this is the 'setting all the necessary properties' part. Part of it is that it's hard to follow the hierarchy of the objects.

The other is that I'm lazy.

Even if I'm typing with considerable speed - and keep pressing ctrl+(alt)+space to let ReSharper do the rest - I still hate it for its repetitiousness. I've always wanted something like ActiveRecord's Fixtures in Rails - but I never had the time to implement it. Yeah, typical excuse, and that's how we usually lose development time even in the short run, so I know I'll have to do it the next time I need to create test data.

Sure, I could always create builder methods for every type to handle, passing in the property values, collections, etc., but even creating those is yet another repetitious task. I've always longed for a more 'elegant', write-once-use-everywhere kind of framework. So when I read this post, I thought maybe I could get away with writing a simple, but usable enough, initializer helper extension. Here's the resulting initializing code:

...

using (Context.Push(ShortRunningTransactionContext.Create()))
{
  dd.Init<Customer>().As(
     Name => "TechTalk",
     RootProjects => new Project[] {
       dd.Init<RootProject>().As(
         Name => "Housekeeping", 
         ChildProjects => new Project[] {
           dd.Init<ChildProject>().As(Name => "Holiday"),
           dd.Init<ChildProject>().As(Name => "Illness")
         }),
       dd.Init<RootProject>().As(
         Name => "Genome", 
         ChildProjects => new Project[] {
           dd.Init<ChildProject>().As(Name => "Development"),
           dd.Init<ChildProject>().As(Name => "Maintenance")
         })
       });

  Context.CommitCurrent();
}

Prettier to the eye - but unfortunately, it's still not practical enough. For one thing, it's easy to represent a tree this way, but it still doesn't offer a solution for many-to-many relations. That's a lesser concern though, and I have ideas for overcoming it (but haven't done so yet, again due to lack of time). A greater problem is that it's not type safe: the parameter names of the lambdas (Name, RootProjects, ChildProjects) are just that - names, aliases; they are not checked at compile time. Even as a dynamically typed language advocate, I don't like too much dynamic behavior in statically typed languages - it usually results in little gain, if any, while losing their advantages, even 'developer-side' ones like refactoring or intellisense support.

So, no conclusions here - I don't know which way I prefer yet. It seems that I really will have to go on and write some XML-file based initialization library (which will share some of the above-mentioned problems of the non-static languages, of course, but renaming those properties in the config by hand after you have just modified them in the code at least feels a bit more normal).

Still, if you're interested, here's the extension for doing the job:

public static class DataDomainInitializerExtension
{
  public static DataDomainInitializer<T> Init<T>(
      this DataDomain dd, params object[] parameters)
  {
    return new DataDomainInitializer<T>(dd.New<T>(parameters));
  }
}

public class DataDomainInitializer<T>
{
  private readonly T target;
  public DataDomainInitializer(T obj)
  {
    this.target = obj;
  }

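  // The trick: each lambda's parameter *name* selects the property to set
  // (resolved via reflection below), while the lambda's body supplies the value.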
  public T As(params Expression<Func<string, object>>[] expressions)
  {
    foreach (Expression<Func<string, object>> expression in expressions)
    {
      object value = GetValue(expression.Body);
      string key = expression.Parameters[0].Name;

      PropertyInfo property = typeof(T).GetProperty(key, 
        BindingFlags.Instance
        |BindingFlags.Public
        |BindingFlags.NonPublic);

      Type collectionType = GetCollectionType(property.PropertyType);
      if (collectionType != null)
      {
        CopyCollection(property, collectionType, value);
      }
      else
      {
        property.SetValue(target, value, null);
      }
    }
    return target;
  }

  private void CopyCollection(
      PropertyInfo property, Type collectionType, object collection)
  {
    object targetProperty = property.GetValue(target, null);

    MethodInfo addMethod = collectionType.GetMethod("Add");
    foreach (object enumValue in (IEnumerable)collection)
    {
      addMethod.Invoke(targetProperty, 
                       new object[] { enumValue });
    }
  }

  private static Type GetCollectionType(Type type)
  {
    foreach (Type @interface in type.GetInterfaces())
      if (@interface.IsGenericType && 
          @interface.GetGenericTypeDefinition() 
            == typeof(ICollection<>))
          return @interface;

     return null;
  }

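  // A constant lambda body (e.g. Name => "TechTalk") can be read directly;
  // any other body is wrapped in a parameterless lambda, compiled and invoked.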
  private static object GetValue(Expression expression)
  {
     ConstantExpression constExpr = expression as ConstantExpression;
     if (constExpr != null)
       return constExpr.Value;
     return (Expression.Lambda<Func<object>>(expression).Compile())();
  }

}

Posted by Attila.

Genome | Linq
Tuesday, 05 February 2008 13:31:19 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
 Tuesday, 22 January 2008

With Genome, you can map standard 1:n and n:m collections for foreign-key/association table database patterns out of the box by using Collection<T> and <OneToManyCollection/> or <ManyToManyCollection/>.

Compared to arbitrary relationships, which can also be mapped with Genome by using Set<T> and a query, Collection<T> offers the following additional functionality:

  • Elements can be explicitly added and removed from the collection.
  • The collection is fully loaded into memory and kept consistent with in-memory object graph modifications.
  • For n:m collections, Genome can fully hide the association class (mapping the database association table) from the domain model if required.

However, for n:m collections, where the association class is annotated with additional values (besides the foreign keys), the standard Collection<T> mapping does not fit.

To provide better support for those mapping scenarios, I have created a Dictionary-like implementation for annotated many-to-many associations, building the functionality on the existing collection support.

Example

I will use a simple domain model to present the idea. Let’s say we have Departments and Employees in our domain. An employee can work in multiple departments, and a department can have more than one employee. This classic many-to-many association is annotated with a job description. The job description is encapsulated in a struct called Job.

So the logical view looks like this:

In the database, we represent this kind of association with an association class/table as follows:

The task is to implement the Department.Employees property, which represents the annotated n:m relation in a consistent way.

Representing an annotated n:m relationship in the domain model

In my opinion, the best representation for Department.Employees is an IDictionary<Employee, Job>. It is ideal because the employees must be unique within the collection, and the annotation data can be accessed by additionally specifying an Employee (indexing into the dictionary with that employee). Note that this representation is only possible if the annotation can be represented with a single type; however, you can always encapsulate the annotations in a struct or class to achieve this. You can use the <EmbeddedStruct/> mapping feature to map this struct on the EmployedAs class.
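For reference, here is a minimal sketch of the domain types assumed in this example (only the type and property names come from the article; the exact shape is illustrative):

// The annotation, encapsulated in a single type so that it can serve
// as the dictionary value (mapped with <EmbeddedStruct/>).
public struct Job
{
    public string Description;
}

// The association class mapping the association table: the two foreign
// keys plus the Job annotation.
public abstract class EmployedAs
{
    public abstract Department Department { get; set; }
    public abstract Employee Employee { get; set; }
    public abstract Job Job { get; set; }
}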

Mapping the association table as a one-to-many collection

First we have to map the one-to-many collection (Department.EmployedAsCollection):

protected abstract Collection<EmployedAs> EmployedAsCollection { get; }

<Member name="EmployedAsCollection">
  <OneToManyCollection parentReference="Department"/>
</Member>

Wrapping the association table into an annotated association

We will wrap this collection with a dictionary implementation to represent the annotated association. I have created a helper class, AnnotatedManyToManyDictionary, that carries out all the necessary transformations. This strongly typed helper needs 4 generic parameters: the association class (TAssoc=EmployedAs), the class owning the collection (TOwner=Department), the “other side” of the association (that is, the key in the dictionary, TKey=Employee) and the annotation (that is, the value in the dictionary, TValue=Job). Basically, you have to wrap the collection with this helper:

public IDictionary<Employee, Job> Employees
{
  get 
  { 
    return new AnnotatedManyToManyDictionary<EmployedAs, Department, Employee, Job>
      (this, EmployedAsCollection, EmployedAsEmployeeJobAccessor.Instance);
  }
}

Helper strategy implementation for getting and setting the keys and values of an association item

The helper class manages the underlying one-to-many collection and the association items to provide the required behavior. As you probably noticed in the constructor call, it still needs a little bit of help: you have to pass in a strategy that “knows” how to get and set the key and value properties of an association item. In the current example, the EmployedAsEmployeeJobAccessor strategy knows how to get and set the Employee and Job properties on an EmployedAs object. Currently you have to write this piece of code yourself to make it work:

private class EmployedAsEmployeeJobAccessor : 
  IAnnotatedManyToManyDictionaryAssociationAccessor<EmployedAs, Employee, Job>
{
  public static readonly EmployedAsEmployeeJobAccessor Instance =
    new EmployedAsEmployeeJobAccessor();

  public Employee GetKey(EmployedAs assoc)
  {
    return assoc.Employee;
  }

  public void SetKey(EmployedAs assoc, Employee key)
  {
    assoc.Employee = key;
  }

  public Job GetValue(EmployedAs assoc)
  {
    return assoc.Job;
  }

  public void SetValue(EmployedAs assoc, Job value)
  {
    assoc.Job = value;
  }
}

Usage

Having done this, you can easily iterate through the employees in a department:

Department dep = GetSomeDepartment();
foreach(Employee e in dep.Employees.Keys) { ... }

You can also iterate through the association elements to retrieve the associated employees of a department along with their job:

foreach(KeyValuePair<Employee,Job> pair in dep.Employees) { ... }

The job of an employee now depends on the associated department. The indexer of the Employees collection takes an associated employee and looks up the job annotated to the association:

Employee emp = GetSomeEmployee();
Job assignedJob = dep.Employees[emp];

Similarly, the job of an employee can be set for a specific department association:

dep.Employees[emp] = assignedJob;

Finally, when associating an employee to a department, the job annotation has to be specified as well:

dep.Employees.Add(emp, assignedJob);

Removing just requires the key, without the annotation:

dep.Employees.Remove(emp);

Limitations

The first limitation is performance with larger collections. The current implementation uses a linear search to look up the employee key in the collection, which can cause a performance hit with larger collections when adding or removing items or when getting an item’s annotation (using the indexer). The reason for this is that I didn’t want to replace Genome’s internal representation of 1:n collections with a dictionary implementation.

The second limitation is that you need to manually code the helper strategy for getting and setting the annotation value in the association items.

Based on your feedback, we might implement this as a native mapping feature in an upcoming Genome release, thus resolving both limitations described.

Sample code

Please find the source code for the example described above attached to this article.

AnnotatedManyToManyAssociation.zip

Posted by TZ.

Tuesday, 22 January 2008 16:41:14 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [2]  | 
 Friday, 18 January 2008
The using statement can be a little bit dangerous at times ...
WCF
Friday, 18 January 2008 22:31:47 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, 09 January 2008
If you are using Visual Studio 2008 for a project, but are still using an old TFS and an old build server (which is quite likely at the moment), you should prepare for at least some inconveniences.
TFS
Wednesday, 09 January 2008 16:23:00 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  |