Tuesday, 01 July 2008

Microsoft has released the .NET Framework 3.5 Service Pack 1 beta. For all Genome users: be aware that this package rather quietly also contains an SP2 beta for the .NET 2.0 Framework, which cannot be deployed separately. There is not much information available about its exact contents (see http://readcommit.blogspot.com/2008/05/microsoft-net-framework-20-service-pack.html).

Be warned that a Genome schema compiled with the SP beta may not load on a machine without the SP beta (as is usual for production servers), yielding the following error:

SerializationException: The object with ID 40221 implements the IObjectReference interface for which all dependencies cannot be resolved. The likely cause is two instances of IObjectReference that have a mutual dependency on each other.

We are sorry about any inconvenience caused. This issue may occur with Genome V3.3.4.38, and we are currently investigating whether other versions are affected. We are providing feedback on this issue to Microsoft and hope that it will be resolved in the final release - in the meantime, please make sure that deployment packages are generated on a machine that does not have the SP1 beta installed!

Tuesday, 01 July 2008 11:30:48 (W. Europe Daylight Time, UTC+02:00)  #    Disclaimer  |  Comments [0]  | 
 Friday, 27 June 2008

Many posts (e.g.: http://blog.deploymentengineering.com/2007/06/dealing-with-very-large-number-of-files.html, http://www.wintellect.com/cs/blogs/jrobbins/archive/2007/10/19/wix-the-pain-of-wix-part-2-of-3.aspx, http://blog.deploymentengineering.com/2007/06/burning-in-hell-with-heat.html) have been written about the problem of adding a large number of files to a WIX installer. This problem is most painful when you want to add content files that do not really have any special purpose, but simply have to be there (e.g. code samples or source code packages).

I also struggled with this problem, and finally I found myself creating a small MsBuild tool (WixFolderInclude.targets) that you can include in your WIX project and use to generate file installers for entire folders on the disk. :-) I call it a tool, as I don’t have a better name for it, but it is not (only) an MsBuild target, nor is it a Task. Actually it is a WIX MsBuild extension, but WIX already has a “WIX extension” term, which is something else. So let’s stick to “tool”. 

The WixFolderInclude tool

Let’s see how you can use this tool; it was tested with the latest WIX framework (v3.0.4220), but it probably works with older v3x versions as well. I’m assuming that you are more or less familiar with the WIX and MsBuild concepts. If not, you can grab the necessary information quickly from Gábor Deák Jahn's WiX Tutorial and the MSDN docs.

WIX projects (.wixproj) are MsBuild projects, and you can extend them with additional MsBuild property definitions or targets. One option is to modify the .wixproj file in a text editor. This is fine, but I like to open the WIX project in Visual Studio, and in that case modifying the project file is not convenient. Instead, I usually start by creating a “Build.properties” file in the WIX project (with item type “Content”, so it does not affect the WIX compilation), where I can write my MsBuild extensions. I have to modify the project file only once, to include the Build.properties file. I usually import it directly before the wix.targets import:

</ItemGroup>
<Import Project="$(MSBuildProjectDirectory)\Build.properties" />
<Import Project="$(WixTargetsPath)" />

But you can also write directly into the project file, if you don’t use the VS integration.

Let’s take a very simple example: I would like to include two code samples in the installer. They are located in some folder (C:\Temp\ConsoleApplication1 and C:\Temp\WebApplication1) and I would like to install them in a “Samples” folder inside the program files entry of my installed application. Of course both samples contain sub-folders that I also want to include.

To achieve this with my tool:

  • you define MsBuild project items that describe how these folders should be installed,
  • you set a couple of properties to hook the tool into the build,
  • the tool then generates temporary WIX fragment files during compilation (and includes them in the compilation); these contain the Directory/Component/File structure and a component group that gathers the components generated for the files in the directory structure,
  • you reference the generated component groups from the installation features of your choice in the normal wxs files (e.g. Product.wxs).

So first, let’s create the folder descriptions for my sample. The tool searches for project items called “WixFolderInclude”, so we have to create such items for the folders we want to include:

<ItemGroup>
  <ConsoleApplication1Files Include="C:\temp\ConsoleApplication1\**" />
  <WixFolderInclude Include="ConsoleApplication1Folder">
    <SourceFiles>@(ConsoleApplication1Files)</SourceFiles>
    <RootPath>C:\temp\ConsoleApplication1</RootPath>
    <ParentDirectoryRef>Dir_Samples</ParentDirectoryRef>
  </WixFolderInclude>
</ItemGroup>

<ItemGroup>
  <WebApplication1Files Include="C:\temp\WebApplication1\**" Exclude="C:\temp\WebApplication1\**\*.scc" />
  <WixFolderInclude Include="WebApplication1Folder">
    <SourceFiles>@(WebApplication1Files)</SourceFiles>
    <RootPath>C:\temp\WebApplication1</RootPath>
    <ParentDirectoryRef>Dir_Samples</ParentDirectoryRef>
  </WixFolderInclude>
</ItemGroup>

As you can see, you can define the set of files to be included with the standard possibilities of MsBuild, so you can include deep folder structures, exclude files, or even list the files one-by-one. In the example here I have excluded the source-control info files (*.scc) from the second sample.

For the WixFolderInclude items, note the following:

  • The main entry (ConsoleApplication1Folder and WebApplication1Folder) describes the name of the folder installation. The generated component group ID will be based on this name, so you can use any meaningful name here, not necessarily the folder name.
  • The “SourceFiles” metadata should contain the files to be included in this set (unfortunately, you cannot use wildcards here directly, so you have to create a separate item for them).
  • The “RootPath” metadata contains the folder root of the folder set to be included in the installer. This could also be derived from the source file set (by taking the common root folder), but I like to have it more explicit, like this.
  • The “ParentDirectoryRef” metadata specifies the ID of the <Directory> under which the folder should be installed. Here I have created a directory (Dir_Samples) for the Samples folder inside the program files location, so I have specified that as the parent.

Now that the definitions are ready, the next step is to set up the tool. This is very simple; you just have to include the following lines in Build.properties (or in the project file):

<Import Project="$(MSBuildProjectDirectory)\Microsoft.Sdc.Tasks\Microsoft.Sdc.Common.tasks" />

<PropertyGroup>
  <CustomAfterWixTargets>$(MSBuildProjectDirectory)\WixFolderInclude.targets</CustomAfterWixTargets>
</PropertyGroup>

The value of CustomAfterWixTargets should point to the tool file. If you have it in the project folder, you can use the setting above directly. Also note that the tool uses the Microsoft.Sdc.Tasks library (http://www.codeplex.com/sdctasks). I have tested it with the latest version (2.1.3071.0), but it might work with older versions as well. You should import the Microsoft.Sdc.Common.tasks file only once, so if you have already imported it in your project, you can skip that line.

Now we are done with the entries in Build.properties, so let’s include the folders in the installer itself. As I mentioned, the tool generates fragments that contain a component group for each included folder. The component group is named CG_ followed by the name of the WixFolderInclude item; in our case, these are CG_ConsoleApplication1Folder and CG_WebApplication1Folder. So let’s include them in the main feature now:

<Product ...>
  ...

  <!-- set up the folder structure -->
  <Directory Id="TARGETDIR" Name="SourceDir">
    <Directory Id="ProgramFilesFolder">
      <Directory Id="INSTALLLOCATION" Name="WixProject1">
        <Directory Id="Dir_Samples" Name="Samples">
        </Directory>
      </Directory>
    </Directory>
  </Directory>

  <!-- include the generated component groups in the main feature -->
  <Feature Id="ProductFeature" Title="WixProject1" Level="1">
    <ComponentGroupRef Id="CG_ConsoleApplication1Folder"/>
    <ComponentGroupRef Id="CG_WebApplication1Folder"/>
  </Feature>
</Product>

And that’s it. We are ready to compile!

Fine tuning

The tool supports some additional configuration options, mainly for debugging purposes: you can specify the folder where the temporary files are stored (by default, the value of the %TMP% environment variable) and whether it should keep the temp files (by default, it deletes them after compilation). These settings can be overridden by including the following lines in Build.properties:

<PropertyGroup>
  <WixFolderIncludeTempDir>C:\Temp</WixFolderIncludeTempDir>
  <WixFolderIncludeKeepTempFiles>true</WixFolderIncludeKeepTempFiles>
</PropertyGroup>

Possible problems

Of course, life is not that easy... so you might encounter problems with using this tool as well. One is that it kills MsBuild’s up-to-date detection, so it will recompile the project even if nothing has changed. I think this could be solved by specifying some smart output tags on the target, but it is not easy, and usually I want to be sure that the installer package is fully recompiled anyway.

The other – probably more painful – problem is that you cannot include additional files from WIX in a subfolder of an included directory. We ran into this when we wanted to create a shortcut to the solution files of the installed samples: since the IDs that the Sdc Fragment task generates are GUIDs, you have no chance of guessing what a subfolder’s ID is.

I have extended WixFolderInclude.targets to support generating deterministic names for selected folders. The folders can be selected with the “DeterministicFolders” metadata of the WixFolderInclude item; the value should be a semicolon-separated list of folders relative to the RootPath. Please note that as these are folders, you cannot really use MsBuild’s wildcard support here; you have to type the folder names manually. Let’s suppose that we have a Documentation folder inside the ConsoleApplication1 sample that we might want to extend from WIX later. We have to define it as follows:

<ItemGroup>
  <ConsoleApplication1Files Include="C:\temp\ConsoleApplication1\**" />
  <WixFolderInclude Include="ConsoleApplication1Folder">
    <SourceFiles>@(ConsoleApplication1Files)</SourceFiles>
    <RootPath>C:\temp\ConsoleApplication1</RootPath>
    <ParentDirectoryRef>Dir_Samples</ParentDirectoryRef>
    <DeterministicFolders>Documentation</DeterministicFolders>
  </WixFolderInclude>
</ItemGroup>

As a result, the ID of the Documentation folder’s <Directory> element will be Dir_ConsoleApplication1Folder_Documentation, so we can extend it from our Product.wxs:

<DirectoryRef Id="Dir_ConsoleApplication1Folder_Documentation">
  <Component Id="C_AdditionalFile" Guid="5D8142C1-...">
    <File Name="AdditionalFile.txt" Source="C:\Temp\AdditionalFile.txt" />
  </Component>
</DirectoryRef>
 

Attachment

In the attached ZIP file, you will find the WixFolderInclude.targets file, and also the sample that I have used here to demonstrate the features (without the silly ConsoleApplication1 and WebApplication1 folders). Feel free to use them!

ManyWixFiles.zip (347.55 KB)

Posted by Gáspár

MSBuild | WIX
Friday, 27 June 2008 15:10:30 (W. Europe Daylight Time, UTC+02:00)  #    Disclaimer  |  Comments [2]  | 
 Tuesday, 20 May 2008

I hadn’t touched the topic of web service proxy generation for a long time, but in order to fine-tune our new message contract generation framework for Genome, I had to look into it once more.

My concrete problem is very simple: I want to generate a proxy for a web service, but instead of generating DTO types from the wsdl, I would like to use the DTO classes I have already implemented (I know that the wsdl-generated ones are just fine, but mine are a little bit better).

The old solution was to let Visual Studio generate the proxy code and then remove the shared types from it by hand - and hope that you don’t have to update the reference too often, because then you have to do this all over again. It seems that nothing has really improved for classic web service proxies: although wsdl.exe has some nice switches, like /sharetypes, you cannot invoke it from the “Add Web Reference” dialog, so you have to complicate the development workflow anyway. I wonder why MS did not implement a backdoor by which I could provide additional wsdl.exe parameters…

The better news is that the WCF client generator can also generate clients for web services. And in the “Add Service Reference” dialog, you can even configure it to reuse types from existing assemblies, if they are referenced in the client project. Super! This is what I wanted. But it does not work :-( … At least not if the service is an ASMX web service (it seems to work fine for WCF services). It still generates my DTO classes.

I have played around with it a lot. It seems that the problem is that the generator does not recognize the matching DTO class, because it is annotated with XML serializer attributes ([XmlType], etc.) and not with WCF attributes. Indeed, if I decorate the class with [DataContract] and [DataMember] attributes, it finds it! However, there is a checking mechanism in the client generator that verifies whether the reused type matches the wsdl definition, and this is what seems to fail, even if I apply exactly the same attributes that it would generate. I have looked around, and it seems that this checking mechanism can fail even for WCF classes.
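
To illustrate the difference, here is a hypothetical DTO in both flavors (the class and member names are made up for this example; only the attributes matter):

using System.Runtime.Serialization;
using System.Xml.Serialization;

// Annotated for the XML serializer, as it would be for an ASMX service --
// the "reuse types" option does not seem to recognize this one:
[XmlType(Namespace = "http://example.org/dto")]
public class CustomerDto
{
  [XmlElement]
  public string Name { get; set; }
}

// The same DTO annotated with WCF data contract attributes -- this one is
// found by the client generator (but may still fail the wsdl comparison
// described above):
[DataContract(Namespace = "http://example.org/dto")]
public class CustomerDataContractDto
{
  [DataMember]
  public string Name { get; set; }
}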

This is a trap: a validation framework that produces false validation errors and cannot even be switched off. So I’m still exactly where I was five years ago: manually removing the generated types from reference.cs.

Posted by Gáspár

Genome | WCF
Tuesday, 20 May 2008 14:32:14 (W. Europe Daylight Time, UTC+02:00)  #    Disclaimer  |  Comments [0]  | 
 Tuesday, 05 February 2008

No, this article does not nag about some code I've seen that misuses new features. This is how I did it - on purpose.

I've always disliked the way I usually set up test data in the database: recreate the database, create the domain objects, set all the necessary properties, commit the context. Take this code for example:

DataDomainSchema schema = DataDomainSchema.LoadFrom("SomeMappingFile");
schema.CreateDbSchema(connStr);

DataDomain dd = new DataDomain(schema, connStr);

using (Context.Push(ShortRunningTransactionContext.Create()))
{
  Customer tt = dd.New<Customer>();
  tt.Name = "TechTalk";

  RootProject tt_hk = dd.New<RootProject>();
  tt_hk.Name = "Housekeeping";

  ChildProject tt_hk_hol = dd.New<ChildProject>();
  tt_hk_hol.Name = "Holiday";
  tt_hk.ChildProjects.Add(tt_hk_hol);

  ChildProject tt_hk_ill = dd.New<ChildProject>();
  tt_hk_ill.Name = "Illness";

  tt_hk.ChildProjects.Add(tt_hk_ill);

  tt.RootProjects.Add(tt_hk);

  RootProject tt_g = dd.New<RootProject>();
  tt_g.Name = "Genome";

  ChildProject tt_g_dev = dd.New<ChildProject>();
  tt_g_dev.Name = "Development";
  tt_g.ChildProjects.Add(tt_g_dev);

  ChildProject tt_g_mnt = dd.New<ChildProject>();
  tt_g_mnt.Name = "Maintenance";
  tt_g.ChildProjects.Add(tt_g_mnt);
  tt.RootProjects.Add(tt_g);

  Context.CommitCurrent();
}

What I dislike in this is the 'setting all the necessary properties' part. Part of it is that it's hard to follow the hierarchy of the objects.

The other is that I'm lazy.

Even if I'm typing with considerable speed - and keep pressing ctrl+(alt)+space and let ReSharper do the rest - I still hate it for its repetitiousness. I always wanted to have something like ActiveRecord's Fixtures in Rails - but I never had the time to implement it. Yeah, typical excuse, and that's how we usually lose development time even in the short run, so I know I'll have to do it the next time I need to create test data.

Sure, I could always create builder methods for every type I have to handle, passing in the property values, collections etc., but even creating those is yet another repetitious task. I always longed for some more 'elegant' write-once-use-everywhere kind of framework. So when I read this post, I thought maybe I could get away with writing a simple, but usable enough, initializer helper extension. Here's the resulting initializing code:

...

using (Context.Push(ShortRunningTransactionContext.Create()))
{
  dd.Init<Customer>().As(
     Name => "TechTalk",
     RootProjects => new Project[] {
       dd.Init<RootProject>().As(
         Name => "Housekeeping", 
         ChildProjects => new Project[] {
           dd.Init<ChildProject>().As(Name => "Holiday"),
           dd.Init<ChildProject>().As(Name => "Illness")
         }),
       dd.Init<RootProject>().As(
         Name => "Genome", 
         ChildProjects => new Project[] {
           dd.Init<ChildProject>().As(Name => "Development"),
           dd.Init<ChildProject>().As(Name => "Maintenance")
         })
       });

  Context.CommitCurrent();
}

Prettier to the eye - but unfortunately, it's still not practical enough. For one thing, it’s easy to represent a tree this way, but it still doesn't offer a solution for many-to-many relations. That's a lesser concern though, and I have ideas for overcoming it (but haven’t done so yet, again due to lack of time). A greater problem is that it's not type safe: the parameter names of the lambdas (Name, RootProjects, ChildProjects) are just that - names, aliases; they are not checked at compile time. Even as a dynamically typed language advocate, I don't like too much dynamic behavior in statically typed languages - it usually brings little gain, if any, while losing their advantages, even 'developer-side' ones like refactoring or IntelliSense support.
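
For example, a mistyped alias compiles without complaint and only fails at runtime, when the reflection-based property lookup finds no matching member (a hypothetical typo, mirroring the snippet above):

// "Naem" is a typo for "Name", but the compiler cannot catch it -
// the parameter name is just an alias that is resolved via reflection at runtime.
Customer customer = dd.Init<Customer>().As(Naem => "TechTalk");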

So, no conclusions here - I don't know which way I prefer yet. It seems that I really will have to go on and write some XML-file based initialization library (which will share some of the above-mentioned problems of the non-static approach, of course, but at least renaming a property by hand in the config file after you have just renamed it in the code feels a bit more normal).

Still, if you're interested, here's the extension for doing the job:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Reflection;
// (plus the Genome namespace that provides DataDomain)

public static class DataDomainInitializerExtension
{
  public static DataDomainInitializer<T> Init<T>(
      this DataDomain dd, params object[] parameters)
  {
    return new DataDomainInitializer<T>(dd.New<T>(parameters));
  }
}

public class DataDomainInitializer<T>
{
  private readonly T target;
  public DataDomainInitializer(T obj)
  {
    this.target = obj;
  }

  // Each expression has the form PropertyName => value: the lambda parameter name is
  // used as the property name, and the body supplies the value to assign.
  public T As(params Expression<Func<string, object>>[] expressions)
  {
    foreach (Expression<Func<string, object>> expression in expressions)
    {
      object value = GetValue(expression.Body);
      string key = expression.Parameters[0].Name;

      PropertyInfo property = typeof(T).GetProperty(key, 
        BindingFlags.Instance
        |BindingFlags.Public
        |BindingFlags.NonPublic);

      Type collectionType = GetCollectionType(property.PropertyType);
      if (collectionType != null)
      {
        CopyCollection(property, collectionType, value);
      }
      else
      {
        property.SetValue(target, value, null);
      }
    }
    return target;
  }

  private void CopyCollection(
      PropertyInfo property, Type collectionType, object collection)
  {
    object targetProperty = property.GetValue(target, null);

    MethodInfo addMethod = collectionType.GetMethod("Add");
    foreach (object enumValue in (IEnumerable)collection)
    {
      addMethod.Invoke(targetProperty, 
                       new object[] { enumValue });
    }
  }

  private static Type GetCollectionType(Type type)
  {
    foreach (Type @interface in type.GetInterfaces())
      if (@interface.IsGenericType && 
          @interface.GetGenericTypeDefinition() 
            == typeof(ICollection<>))
          return @interface;

     return null;
  }

  // Evaluates the right-hand side of the initializer lambda: constants are read
  // directly, anything else is compiled into a delegate and executed.
  private static object GetValue(Expression expression)
  {
     ConstantExpression constExpr = expression as ConstantExpression;
     if (constExpr != null)
       return constExpr.Value;
     return (Expression.Lambda<Func<object>>(expression).Compile())();
  }

}

Posted by Attila.

Genome | Linq
Tuesday, 05 February 2008 13:31:19 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
 Tuesday, 22 January 2008

With Genome, you can map standard 1:n and n:m collections for foreign-key/association table database patterns out of the box by using Collection<T> and <OneToManyCollection/> or <ManyToManyCollection/>.

Compared to arbitrary relationships, which can also be mapped with Genome by using Set<T> and a query, Collection<T> offers the following additional functionality:

  • Elements can be explicitly added and removed from the collection.
  • The collection is fully loaded into memory and kept consistent with in-memory object graph modifications.
  • For n:m collections, Genome can fully hide the association class (mapping the database association table) from the domain model if required.

However, for n:m collections, where the association class is annotated with additional values (besides the foreign keys), the standard Collection<T> mapping does not fit.

To provide better support for such mapping scenarios, I have created a dictionary-like implementation for annotated many-to-many associations, built on top of the existing collection support.

Example

I will use a simple domain model to present the idea. Let’s say we have Departments and Employees in our domain. An employee can work in multiple departments, and a department can have more than one employee. This classic many-to-many association is annotated with a job description. The job description is encapsulated in a struct called Job.
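
Before looking at the mapping, here is a rough sketch of the domain types just described. Only the names Department, Employee, EmployedAs and Job (and the job description) come from the example; the member layout — abstract classes with abstract properties, in the style of the EmployedAsCollection member shown below — is an assumption for illustration:

// Rough sketch of the example domain model; the member layout is an assumption.
public struct Job
{
  public string Description;                    // the annotation data on the association
}

public abstract class Employee
{
  public abstract string Name { get; set; }
}

// The association class mapping the n:m association table in the database.
public abstract class EmployedAs
{
  public abstract Department Department { get; set; }
  public abstract Employee Employee { get; set; }
  public abstract Job Job { get; set; }         // mapped via <EmbeddedStruct/>
}

public abstract class Department
{
  public abstract string Name { get; set; }
  // the EmployedAsCollection mapping and the Employees dictionary wrapper
  // are shown in the following sections
}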

So the logical view looks like this:

In the database, we represent this kind of association with an association class/table as follows:

The task is to implement the Department.Employees property, which represents the annotated n:m relation in a consistent way.

Representing an annotated n:m relationship in the domain model

In my opinion, the best representation for Department.Employees is an IDictionary<Employee, Job>. It is ideal because the employees must be unique within the collection, and the annotation data can be accessed by additionally specifying an Employee (indexing into the dictionary with that employee). Note that this representation is only possible if the annotation can be represented with a single type; however, you can always encapsulate the annotations in a struct or class to achieve this. You can use the <EmbeddedStruct/> mapping feature to map this struct onto the EmployedAs class.

Mapping the association table as a one-to-many collection

First we have to map the one-to-many collection (Department.EmployedAsCollection):

protected abstract Collection<EmployedAs> EmployedAsCollection { get; }

<Member name="EmployedAsCollection">
  <OneToManyCollection parentReference="Department"/>
</Member>

Wrapping the association table into an annotated association

We will wrap this collection with a dictionary implementation to represent the annotated association. I have created a helper class AnnotatedManyToManyDictionary that carries out all necessary transformations. This strongly typed helper needs 4 generic parameters, as you have to specify the association class (TAssoc=EmployedAs), the class owning the collection (TOwner=Department), the “other side” of the association (that is, the key in the dictionary, TKey=Employee) and the annotation that is the value in the dictionary (TValue=Job). Basically, you have to wrap the collection with this helper:

public IDictionary<Employee, Job> Employees
{
  get 
  { 
    return new AnnotatedManyToManyDictionary<EmployedAs, Department, Employee, Job>
      (this, EmployedAsCollection, EmployedAsEmployeeJobAccessor.Instance);
  }
}

Helper strategy implementation for getting and setting the keys and values of an association item

The helper class manages the underlying one-to-many collection and the association items to provide the required behavior. As you probably noticed in the constructor call, it still needs a little bit of help. You have to pass a strategy that “knows” how to get and set the key and value properties of the association item. In the current example, the EmployedAsEmployeeJobAccessor strategy knows how to get and set the Employee and Job properties on an EmployedAs object. Currently you have to write this piece of code to make that work:

private class EmployedAsEmployeeJobAccessor : 
  IAnnotatedManyToManyDictionaryAssociationAccessor<EmployedAs, Employee, Job>
{
  public static readonly EmployedAsEmployeeJobAccessor Instance =
    new EmployedAsEmployeeJobAccessor();

  public Employee GetKey(EmployedAs assoc)
  {
    return assoc.Employee;
  }

  public void SetKey(EmployedAs assoc, Employee key)
  {
    assoc.Employee = key;
  }

  public Job GetValue(EmployedAs assoc)
  {
    return assoc.Job;
  }

  public void SetValue(EmployedAs assoc, Job value)
  {
    assoc.Job = value;
  }
}

Usage

Having done this, you can easily iterate through the employees in a department:

Department dep = GetSomeDepartment();
foreach(Employee e in dep.Employees.Keys) { ... }

You can also iterate through the association elements to retrieve the associated employees of a department along with their job:

foreach(KeyValuePair<Employee, Job> entry in dep.Employees) { ... }

The job of an employee now depends on the associated department. The indexer of the Employees collection takes an associated employee and looks up the job annotated to the association:

Employee emp = GetSomeEmployee();
Job assignedJob = dep.Employees[emp];

Similarly, the job of an employee can be set for a specific department association:

dep.Employees[emp] = assignedJob;

Finally, when associating an employee to a department, the job annotation has to be specified as well:

dep.Employees.Add(emp, assignedJob);

Removing an association just requires the key, without the annotation:

dep.Employees.Remove(emp);

Limitations

The first limitation is performance with larger collections. The current implementation uses a linear search to look up the employee key in the collection, which can cause a performance hit in larger collections when adding or removing items or when getting an item’s annotation (using the indexer). The reason for this is that I didn’t want to replace Genome’s internal representation of 1:n collections with a dictionary implementation.
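
For illustration, the lookup inside the dictionary wrapper might work roughly like this. This is a simplified sketch, not the implementation shipped in the attached sample; the class and field names are assumptions, only the accessor interface name comes from the code above:

using System.Collections.Generic;

// Simplified sketch of the linear lookup described above.
public class AnnotatedManyToManyLookupSketch<TAssoc, TKey, TValue>
{
  private readonly ICollection<TAssoc> associations;   // the wrapped 1:n association collection
  private readonly IAnnotatedManyToManyDictionaryAssociationAccessor<TAssoc, TKey, TValue> accessor;

  public AnnotatedManyToManyLookupSketch(
    ICollection<TAssoc> associations,
    IAnnotatedManyToManyDictionaryAssociationAccessor<TAssoc, TKey, TValue> accessor)
  {
    this.associations = associations;
    this.accessor = accessor;
  }

  public TValue this[TKey key]
  {
    get
    {
      // O(n): every lookup scans the underlying association collection
      foreach (TAssoc assoc in associations)
        if (EqualityComparer<TKey>.Default.Equals(accessor.GetKey(assoc), key))
          return accessor.GetValue(assoc);
      throw new KeyNotFoundException();
    }
    set
    {
      foreach (TAssoc assoc in associations)
        if (EqualityComparer<TKey>.Default.Equals(accessor.GetKey(assoc), key))
        {
          accessor.SetValue(assoc, value);              // existing association: update the annotation
          return;
        }
      // no association for this key yet: the real implementation creates a new
      // association object here and adds it to the underlying collection
    }
  }
}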

The second limitation is that you need to manually code the helper strategy for getting and setting the annotation value in the association items.

Based on your feedback, we might implement this as a native mapping feature in an upcoming Genome release, thus resolving both limitations described.

Sample code

Please find the source code for the example described above attached to this article.

AnnotatedManyToManyAssociation.zip

Posted by TZ.

Tuesday, 22 January 2008 16:41:14 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [2]  | 
 Friday, 18 January 2008
The using statement can be a little bit dangerous at times ...
WCF
Friday, 18 January 2008 22:31:47 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, 09 January 2008
If you are using Visual Studio 2008 for a project, but are still using an old TFS and an old build server (which is quite likely at the moment), you should prepare for at least some inconveniences.
TFS
Wednesday, 09 January 2008 16:23:00 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
 Friday, 04 January 2008
Genome has been developed with Team Foundation Server (TFS) for some time now, and it might be interesting to know in this context that TechTalk is a Visual Studio Inner Circle partner. TFS has proven to be a good source control system, but there are a few points that could do with a bit of improvement (particularly when compared to Subversion (SVN), with which I have extensive experience; that said, there are some features that both systems lack).
TFS
Friday, 04 January 2008 16:19:22 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
 Thursday, 15 November 2007
As I am writing this, I am just heading home from TechEd, waiting at the Barcelona airport for my return flight to Vienna. It has been a busy time for the Genome team since September - unfortunately so busy that we couldn’t take time to blog about all the things that are going on. We weren't at TechEd only as attendees, but also exhibiting in the Visual Studio Partner area, with a total of 8 TechTalkers in Barcelona. To catch up with all the things that have happened since September, I’ll start with TechEd, while the memories are still fresh.
Thursday, 15 November 2007 20:02:27 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
 Thursday, 08 November 2007
Typically, Genome is used to map tables to their data domain objects. But what do you do when you have to use a database that was not designed for mapping objects and therefore is not in any normal form, etc.?
Thursday, 08 November 2007 16:23:06 (W. Europe Standard Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  |