Wednesday, October 15, 2008

New Transactional Memory Blog On MSDN

There is a Transactional Memory Blog On MSDN.

Here is an excerpt:

If you have been using the Parallel Extension CTP or simply writing multi-threaded code yourself, you probably have run into situations where you needed to share data between multiple threads. So long as the data is read-only, this isn’t a problem, but what about mutating data?

The easy answer is to use a lock. There are a lot of blog entries and white papers talking about how to use locks correctly, how to avoid deadlocks, which locks are best for a particular scenario, or even how to correctly write lock-free code. You could read all of these and still run into trouble using locks. You see, the problem isn’t sharing one piece of data; it’s when you are sharing multiple pieces of data – for instance data that has a complex schema involving multiple complex objects such as trees or lists.
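
To make the excerpt’s point concrete, here is a minimal C# sketch of the “multiple pieces of data” problem; it is my own illustration, not code from the MSDN post, and the WorkQueue type and its members are hypothetical. Two collections form a single invariant (an item is either pending or completed), so one lock has to cover both mutations.

// Two shared collections must be updated together; guarding each with its own
// lock is not enough, because a reader could then observe an item in neither
// (or both) lists.
using System.Collections.Generic;

class WorkQueue
{
    private readonly object _sync = new object();   // one lock guards the whole invariant
    private readonly List<string> _pending = new List<string>();
    private readonly List<string> _completed = new List<string>();

    public void Complete(string item)
    {
        lock (_sync)                                 // both mutations happen atomically
        {
            _pending.Remove(item);
            _completed.Add(item);
        }
    }

    public int PendingCount
    {
        get { lock (_sync) { return _pending.Count; } }
    }
}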

So, locks are a basic tool in your arsenal. From the simple lock, you can build synchronization mechanisms that hopefully protect your data correctly and don’t impact your scalability.

Ah, I hear you sigh. Yes, hope is eternal, software has bugs, and multithreaded software has race conditions, deadlocks, and scalability problems. Why? Well, because many find it hard and fraught with peril to correctly use anything more than a single lock, or at most some really small set of coarse-grained locks. As code matures, locking hierarchies that provide fine-grained locking often morph from elegant to clumsy. You may also find that as your project grows, lock depth blossoms unnecessarily, or race conditions are introduced simply because programmers were unaware that it was necessary to lock a specific resource. The end result is code that simply doesn’t scale, or an application whose reliability plummets unless some of your best and brightest spend time tuning, fixing, “right-sizing” and eliminating locks. Even after all that work, are you confident that your code is bug-free? Do race conditions exist in it?
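
And here is a hedged sketch of the failure mode described above: two threads acquire the same pair of fine-grained locks in opposite orders and deadlock. The Account and Transfer names are invented for illustration only.

// Thread A calls Transfer(x, y, ...) while thread B calls Transfer(y, x, ...):
// A holds x.Sync and waits for y.Sync, B holds y.Sync and waits for x.Sync.
class Account
{
    public readonly object Sync = new object();
    public decimal Balance;
}

static class Transfers
{
    public static void Transfer(Account from, Account to, decimal amount)
    {
        lock (from.Sync)
        {
            lock (to.Sync)
            {
                from.Balance -= amount;
                to.Balance += amount;
            }
        }
    }

    // A common fix is to impose a global lock order (for example by an account id)
    // before taking either lock, which is exactly the kind of locking hierarchy the
    // post says tends to morph from elegant to clumsy as code matures.
}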


Subscribe to this blog here: http://blogs.msdn.com/stmteam/default.aspx

Friday, December 28, 2007

640K Is Enough For Anyone, 64 Cores Is Enough For Anyone, Windows Cannot Handle More Than 64 Cores At The Moment

It is déjà vu all over again: someone has decided that 64 cores is enough for everyone, and Windows currently cannot handle the 80-core Intel CPUs when they come out.

To carry out multitasking, Microsoft Windows 2000 and Windows Server 2003 sometimes move process threads among different processors. Although efficient from an operating system point of view, this practice can reduce overall performance, as each processor cache is repeatedly reloaded with data. Assigning processors to specific threads can improve performance under these conditions by eliminating processor reloads and reducing thread migration across processors (thereby reducing context switching); such an association between a thread and a processor is called processor affinity.
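
For what it’s worth, pinning a process from .NET is a one-liner on the Process class, and the fact that the affinity mask is a single machine word is essentially where the 64-logical-processor ceiling comes from. A minimal sketch (the 0x3 mask, meaning cores 0 and 1, is just an example value):

// Restrict the current process to a subset of CPUs via its affinity bitmask.
using System;
using System.Diagnostics;

class AffinityDemo
{
    static void Main()
    {
        Process current = Process.GetCurrentProcess();
        current.ProcessorAffinity = (IntPtr)0x3;    // allow only cores 0 and 1
        Console.WriteLine("Affinity mask: 0x{0:X}", (long)current.ProcessorAffinity);
    }
}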

I guess your Data Parallel Haskell or Parallel LINQ (PLINQ) will have to wait for another version of Windows to take advantage of 80 cores :-(

Sunday, December 9, 2007

Screencast: Parallel LINQ (PLINQ)

Achieving "Declarative Data Parallelism" with the Parallel Extensions is achieved through Parallel LINQ (PLINQ). This 20' video explains how to parallelise your LINQ queries, how it works under the covers and how to configure it for advanced scenarios.

Watch the screencast (WMV)
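
If you want a taste before watching, here is a minimal PLINQ sketch using the AsParallel() operator as it later shipped in .NET 4; the CTP covered in the screencast exposes essentially the same pattern, though details differed. The query itself (summing the squares of the primes below a million) is just an example.

using System;
using System.Linq;

class PlinqDemo
{
    static void Main()
    {
        int[] numbers = Enumerable.Range(1, 1000000).ToArray();

        // AsParallel() turns this LINQ-to-Objects query into a PLINQ query that is
        // partitioned across the available cores. WithDegreeOfParallelism is one of
        // the knobs for the "advanced scenarios" the video mentions.
        long sumOfSquaresOfPrimes = numbers
            .AsParallel()
            .Where(IsPrime)
            .Sum(n => (long)n * n);

        Console.WriteLine(sumOfSquaresOfPrimes);
    }

    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }
}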