I've recently been assigned the task of setting up and documenting the SharePoint development environment. If you were dismayed by the vagaries of the installation process, you've seen nothing yet. Although SharePoint is built upon ASP.NET, it is an Office product, and the Office team has always been the cash cow of Microsoft. Apparently, their priority was to make the product user-friendly. And if your company is large enough to have thousands of sites, I can see no better choice than SharePoint; it scales better than anything else out there.
For smaller companies with in-house development teams, there is still a huge hole in this market, and it will be filled by someone. Very few users will get to see how "friendly" SharePoint is if the IT people decide that it's too much of a pain to develop on. Not only that, but very few IT teams have the expertise required to do this properly anyway. Training in this area has been pretty much non-existent until recently. Some months back, I attended a MOSS boot camp, one of the very first in the United States, and it focused only on the installation and administration of SharePoint. If you are a developer like myself, then for the most part, you are pretty much on your own.
The Bottom Line:
* You have to develop locally for debugging and testing
* You need SharePoint/WSS on your local machine
* SharePoint/WSS needs Windows Server 2003 SP1 to run
* You need to use virtual machine(s) locally to host Windows Server 2003
* You may need to bump up RAM (to 4 GB) and possibly add a fast external drive
* TFS support will not be available until the Orcas release
Everything You Need to Know:
* Use Virtual PC Differencing Disks to your Advantage
* MOSS 2007 Development - Virtual Server Set Up
* Team-Based Development in MOSS
* Development Tools and Techniques for Working with Code in WSS 3.0
* How to Create a MOSS 2007 VPC Image - the Whole 9 Yards
I will be implementing the development environment myself and will report back on the outcome. Initially, I am going to set up with Virtual PC VHDs and differencing disks on a test machine and work from there. Eventually, I will deploy to a staging server which will be an exact duplicate of the live server. As for source control, I will have to wait until the Orcas release and start all over again. There is a TFS Beta 1 available for those who are interested. Note that you can still save your Web Parts in the current version of TFS. There are hacks to get the current version of TFS to work with MOSS solutions, but they are supposed to be so involved as to not warrant the time spent on them. Also, some people are saying that VMware is faster than Virtual PC (or Virtual Server if you have the license), but I am trying to get some benchmarks to support this. I can't wait to actually sit down and build something!
On a practical level, yes. And with a rack of new technologies in the offing from Microsoft, we are sure to have a far more enjoyable programming experience in the near future than we have had up to now.
For almost three decades, object-oriented programmers have had to deal with the impedance mismatch issue. In essence, the programs we use to access our databases are object-oriented while the databases themselves are relational; there is no direct mapping between them. And while much of the mismatch is hidden through the use of the ADO.NET API, it would be fair to say that the two models have about as much in common as chalk and cheese.
In middle to large-sized projects, we typically use entity objects as object-oriented views of our relational data. The biggest problem has always been establishing the mapping between the two. This often entails the use of third-party tools such as NHibernate. An alternative approach is object-relational databases, which I never did like.
Enter the Magic Triumvirate of LINQ, Orcas and the Entity Framework! This set of technologies promises to eliminate the impedance mismatch between the various data models and programming languages. With LINQ, we now have rich queries built right into the language and can access the data source whether it's XML, relational or objects. Orcas, the next version of Visual Studio, is currently available for download as a CTP release. The best bet may be to install the Virtual PC version. I'm planning on doing this myself this weekend as I really want to get the feel of the new LINQ syntax. The only sour note in all of this is that the final version of the Entity Framework won't be available until after the Orcas release. In the meantime, you can download the new CTP version of the Entity Framework.
Here's a simple LINQ query taken from 101 LINQ Samples:
public void Linq()
{
    List<Product> products = GetProductList();
    var productNames = from p in products
                       select p.ProductName;
    foreach (var productName in productNames)
        Console.WriteLine(productName);
}
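The sample above queries an in-memory object list, but the same query syntax works against XML as well, which is what makes the "whether it's XML, relational or objects" claim interesting. Here is a minimal sketch using LINQ to XML; the catalogue, element names and prices are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class LinqToXmlDemo
{
    // Returns the names of products priced above the given threshold.
    public static List<string> ExpensiveProducts(XElement catalogue, decimal threshold)
    {
        // Identical query syntax to the object example, but over XML elements
        var names = from p in catalogue.Elements("Product")
                    where (decimal)p.Attribute("UnitPrice") > threshold
                    select (string)p.Attribute("Name");
        return names.ToList();
    }

    static void Main()
    {
        XElement catalogue = XElement.Parse(
            @"<Products>
                <Product Name='Chai' UnitPrice='18.00' />
                <Product Name='Chang' UnitPrice='19.00' />
                <Product Name='Aniseed Syrup' UnitPrice='10.00' />
              </Products>");

        foreach (string name in ExpensiveProducts(catalogue, 15m))
            Console.WriteLine(name);
    }
}
```

Note that no DOM traversal code appears anywhere; the cast operators on XAttribute do the type conversion for us.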
I'm not trying to re-ignite this age-old debate but am merely using it as a starting point to explore the data access options available today and where we may be heading in the near future. In the next post, I will look at multi-tier design and the use of custom entity class objects. Later on, we may look at that new animal called LINQ. I recommend the following as an introduction to ADO.NET:
Best Practices for Using ADO.NET
Working with Data in ASP.NET 2.0
Many programmers mistakenly view the DataSet as the only real option for data access as far as Web applications are concerned. We often choose between the DataReader and the DataSet based either on our familiarity with the syntax of one over the other or on our lack of understanding of the basic differences between them. More often than not, we end up using the wrong data access method for the wrong reasons. People typically see it as a simple choice: the DataReader for speed and the DataSet for data manipulation. In reality, the choice involves a whole range of trade-offs. As an overview, here are the basic characteristics of both the DataReader and the DataSet:
The DataReader:
* Forward-only, read-only access
* One row at a time is stored in memory and either written over or discarded
* Light on resources such as IIS
* Cannot be persisted to cache or session
* Holds on to the data connection
The DataSet:
* Can navigate backwards and forwards
* Stores all data in memory
* More intense use of IIS and memory resources
* Connection is closed as soon as the data is gathered
* Relational-data-aware; can consist of collections of related tables
* Can make updates back to the database
* Data can be stored in session
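The "relational-data-aware" point is worth a concrete illustration: a DataSet can hold several tables along with the keys and relations between them, and you can navigate from parent rows to child rows entirely in memory. A minimal sketch, with invented table and column names (in a real application the tables would be filled by DataAdapters):

```csharp
using System;
using System.Data;

static class RelationalDemo
{
    public static DataSet BuildShop()
    {
        DataSet ds = new DataSet("Shop");

        DataTable customers = ds.Tables.Add("Customer");
        customers.Columns.Add("CustomerID", typeof(int));
        customers.Columns.Add("Name", typeof(string));
        // Keys are not pulled across from the database; they must be re-created in code
        customers.PrimaryKey = new[] { customers.Columns["CustomerID"] };

        DataTable orders = ds.Tables.Add("Order");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("CustomerID", typeof(int));

        // The relation lets us walk from a customer to its orders without SQL
        ds.Relations.Add("CustomerOrders",
            customers.Columns["CustomerID"], orders.Columns["CustomerID"]);

        customers.Rows.Add(1, "Acme");
        orders.Rows.Add(100, 1);
        orders.Rows.Add(101, 1);
        return ds;
    }

    static void Main()
    {
        DataSet ds = BuildShop();
        DataRow acme = ds.Tables["Customer"].Rows[0];
        // Navigate parent-to-child entirely in memory
        Console.WriteLine(acme.GetChildRows("CustomerOrders").Length);   // 2
    }
}
```

Nothing remotely like this is possible with a DataReader, which only ever sees one flat row at a time.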
Things We Tend to Forget:
* To close the DataReader and/or Connection
* Data-bound controls keep their own copy of any data to which they are bound
* To use the DataReader's HasRows property and IsDBNull method to avoid errors
* To use the DataReader for simple data-binding where no caching of data is necessary
* With DataSets, primary keys and relationships have to be re-created in code
* DataSets support data transactions and data filtering
* Unlike DataReaders, DataSets support binding to multiple controls
* DataSets can be used to manipulate the data as XML
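The first two items in the list above are exactly what the C# using statement is for. Here is a sketch of the pattern; the connection string, table and column are invented, so treat it as an outline rather than paste-ready code:

```csharp
using System;
using System.Data.SqlClient;

static class ReaderDemo
{
    static void PrintProductNames(string connectionString)
    {
        // using guarantees Dispose (and therefore Close) on both the
        // connection and the reader, even if an exception is thrown
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT ProductName FROM Products", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                if (!reader.HasRows)
                    return;                 // nothing to print or bind

                while (reader.Read())
                {
                    // Guard against nulls before calling GetString
                    if (!reader.IsDBNull(0))
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }   // reader and connection are both closed by here
    }
}
```

Forget the using blocks and the connection stays open until the garbage collector gets around to it, which under load is exactly how connection pools get exhausted.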
If there is one lesson to be learned here, it is not to make snap
decisions when it comes to choosing between these two data access
models. You need to think it through each and every time you are
accessing the database and binding to a control. The DataReader may be
fast but it doesn't support binding to multiple controls. So, you won't
gain much mileage from trying to use it to sort and filter with rich
controls. The DataSet is particularly useful in intermittently connected scenarios, such as salespeople on the road, where the data can be serialized to XML and stored offline.
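That offline scenario is straightforward because a DataSet can round-trip itself through XML, schema and all. A minimal sketch, with invented table and file names:

```csharp
using System;
using System.Data;

static class OfflineDemo
{
    // Writes the DataSet to disk as XML and reads it back into a fresh DataSet.
    public static DataSet RoundTrip(DataSet ds, string path)
    {
        // WriteSchema embeds the table structure so nothing has to be inferred later
        ds.WriteXml(path, XmlWriteMode.WriteSchema);

        DataSet offline = new DataSet();
        offline.ReadXml(path, XmlReadMode.ReadSchema);
        return offline;
    }

    static void Main()
    {
        // In practice this would be filled by a DataAdapter before going offline
        DataSet ds = new DataSet("Orders");
        DataTable orders = ds.Tables.Add("Order");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("Customer", typeof(string));
        orders.Rows.Add(1, "Acme");
        orders.Rows.Add(2, "Contoso");

        DataSet restored = RoundTrip(ds, "orders.xml");
        Console.WriteLine(restored.Tables["Order"].Rows.Count);   // 2
    }
}
```

The DataReader, which cannot even outlive its connection, has no equivalent of this at all.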
I mentioned that data-bound controls keep their own copy of the data to which they are bound: by my calculations, that makes for three distinct copies of the data in play when using a DataSet. Any questions?!