Getting List Schema and Importing List Items/Documents to SharePoint Remotely
Demonstrates retrieving schema and importing to SharePoint via Web Services and RPC.
Introduction
As SharePoint becomes more popular, many people are building applications to index and upload their documents into SharePoint. Using the object model to upload documents is pretty simple to understand, but what if you want to build an application that needs to upload from a client machine? Well, what I've done here is cut your time in half by building a library that will help you retrieve SharePoint schema and upload your documents, with support for foldering. Now, all you'll have to do is write the code that generates the document and its related meta-data. For learning purposes, I've created an example application that reads an Excel 2003 document and uploads the documents referenced inside of it.
Background
SharePoint provides a slew of Web Services, but the one that we are going to use is Lists.asmx. This Web Service can be accessed through http(s)://<yourserver>/<site|subsite>/_vti_bin/lists.asmx. This service gives you the ability to pull schema, and to upload, delete, and modify documents.
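If you're curious what the library is wrapping under the covers, here's a minimal sketch of hitting the service directly through a Visual Studio web reference. I'm assuming the web reference is named ListsService; GetListCollection is one of the service's methods, and it returns the site's list schema as XML.

// Minimal sketch of calling Lists.asmx directly. "ListsService" is just
// what I named the web reference; adjust the URL to your site or subsite.
ListsService.Lists svc = new ListsService.Lists();
svc.Url = "http://yourserver/yoursite/_vti_bin/lists.asmx";
svc.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;

// Returns an XML node describing every list on the site.
System.Xml.XmlNode lists = svc.GetListCollection();
Console.WriteLine(lists.OuterXml);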
Using the Code
As I said before, I've created an example application that uploads documents that are referenced in an Excel 2003 document. This application will also upload items to lists that do not reference documents. So, let's go ahead and take a look at my Excel 2003 document. Yeah, this is a document that I put together. The ones that I get from my co-workers don't really look as neat.
What we have here is five files that need to be uploaded. They have the meta-data fields Title and Region, and each needs to be placed into its regional folder. Let's just say that our company has different branches, and they want to place their documents in their own folders, so they don't get theirs confused with other branches'.
What I've done is create a document library that has foldering turned on, and I've also added the Region column. The image below is the default view for the library:
Alright, now that I have the library created, let's go ahead and run SPExcelImport.
In the SharePoint settings, you'll want to punch in the URL of the site you want to connect to. Keep in mind that Web Services work in relation to the URL you use to connect to them. If you want to get information back for lists in subsites, then you'll need to put in the full path to that subsite, for example, http(s)://<yourserver>/<sitecollection>/<subsite>. After you put it in, you can click Go. When you click Go, the schema for all of the lists will be pulled back via the Web Services, and you'll get a list of lists to choose from in the drop-down. Let's take a look at what happens in the code when the Go button is clicked.
First, we are going to create an instance of SPRepositoryProvider and SPRepositoryRequest.
SPRepositoryProvider provider = new SPRepositoryProvider();
SPRepositoryRequest request = new SPRepositoryRequest();
request.Uri = tboxSharePointUrl.Text;
request.Credentials = CredentialCache.DefaultNetworkCredentials;
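A quick note: DefaultNetworkCredentials connects as the currently logged-on user. If you need to connect as a different account, a plain NetworkCredential works as well (the user name, password, and domain below are placeholders):

request.Credentials = new NetworkCredential("username", "password", "DOMAIN");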
Now, we can go ahead and call the method that retrieves the SPRepository object. This object contains a Lists collection. We can then loop through these lists and populate our library combo box.
SPRepository repository = provider.GetRepository(request).Repository;
foreach (SPList lib in repository.Lists.Values)
    cboLibrary.Items.Add(lib);
Once the user picks their library, we'll want to fill the content type combo box so they can choose which one they want to use.
SPList lib = (SPList)cboLibrary.SelectedItem; // the library the user picked
foreach (SPContentType contentType in lib.ContentTypes.Values)
    cboContentType.Items.Add(contentType);
Within the SPContentType object, you'll have access to all of the columns. After you select your Excel file, you'll get the ability to map your columns. The form then binds the collection of SPColumns to the DataGrid. Once you have an SPColumn object, you'll gain access to all of its schema properties. All of this information is very useful for indexing purposes, like making sure your users are entering valid data.
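For example, you could dump the schema for the selected content type before building your mappings. This is a minimal sketch; only Name is confirmed by the code in this article, and Required is a hypothetical property standing in for whatever validation metadata SPColumn actually exposes:

// Dump the columns of the selected content type.
// "Required" is hypothetical; substitute the real property names.
foreach (SPColumn col in contentType.Columns.Values)
    Console.WriteLine("{0} (required: {1})", col.Name, col.Required);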
After you've configured your mappings, you can go ahead and click Start.
When the Start button is clicked, we'll launch a new thread that does the processing. It will loop through each row in the Excel file, retrieve our meta-data values, and upload our document. This can all be done easily through the SPList object that was retrieved earlier. As an overview, let's take a look at all of the properties and methods of the SPList object.
As you can see, there is a lot of information that can be helpful to you. OK, so let's take a look at some actual code that uses SPList.
In ExecuteProcess, we'll go through each row in the Excel document and create an SPItem from the SPList using SPList.CreateNewItem(SPContentType). This method returns an SPItem with all of its properties defaulted from the schema. With that, we'll set its Folder, Properties, and Binary properties. Then, we can update the item, and we are done: the document is now in SharePoint. If we do not want the document checked in, we can set the SPUpdateItemRequest object's CheckIn property to false. (A condensed sketch of this whole flow appears at the end of the article.)
private void ExecuteProcess()
{
    try
    {
        DataTable dsItems = GetItems();

        // Reset the shared counters under the writer lock.
        _rwlStat.AcquireWriterLock(Timeout.Infinite);
        _processedItemsCount = 0;
        _erroredItemsCount = 0;
        _totalItemsCount = dsItems.Rows.Count;
        if (_rwlStat.IsWriterLockHeld)
            _rwlStat.ReleaseWriterLock();

        foreach (DataRow dr in dsItems.Rows)
        {
            try
            {
                if (_isStop)
                    return;

                SPList list = _context.ContentType.List;
                SPItem item = list.CreateNewItem(_context.ContentType);

                SetFields(item, dr);
                SetFolder(item, dr);
                if (item.ContentType.List.BaseType == SPListBaseType.DocumentLibrary)
                    SetBinary(item, dr);

                list.UpdateItem(new SPUpdateItemRequest() { Item = item });

                _rwlStat.AcquireWriterLock(Timeout.Infinite);
                _processedItemsCount++;
                ProcessStatusInfo psi = new ProcessStatusInfo();
                psi.ErroredItemsCount = _erroredItemsCount;
                psi.ProcessedItemsCount = _processedItemsCount;
                psi.TotalItemsCount = _totalItemsCount;
                if (_rwlStat.IsWriterLockHeld)
                    _rwlStat.ReleaseWriterLock();

                _context.SyncContext.Post(_context.UpdateStatusCallback, psi);
            }
            catch (Exception ex)
            {
                _rwlStat.AcquireWriterLock(Timeout.Infinite);
                _erroredItemsCount++;
                if (_rwlStat.IsWriterLockHeld)
                    _rwlStat.ReleaseWriterLock();

                ProcessStatusInfo psi = new ProcessStatusInfo();
                psi.ErroredItemsCount = _erroredItemsCount;
                psi.ProcessedItemsCount = _processedItemsCount;
                psi.TotalItemsCount = _totalItemsCount;
                _context.SyncContext.Post(_context.UpdateStatusCallback, psi);

                UpdaterLog.LogException(ex);
            }
        }
    }
    catch (Exception ex)
    {
        Logging.UpdaterLog.LogException(ex);
        _isFaulted = true;
    }
    finally
    {
        // Always post a final status so the UI knows we're done.
        ProcessStatusInfo psi = new ProcessStatusInfo();
        psi.Finished = true;
        psi.ErroredItemsCount = _erroredItemsCount;
        psi.ProcessedItemsCount = _processedItemsCount;
        psi.TotalItemsCount = _totalItemsCount;
        _context.SyncContext.Post(_context.UpdateStatusCallback, psi);
    }
}
private void SetFolder(SPItem item, DataRow dr)
{
    if (_context.FolderColumn == null)
        return;

    // Normalize to a library-relative path, e.g. "\Mid West\" becomes
    // "Mid West" and "Branches\Mid West" becomes "Branches/Mid West".
    string folder = dr[_context.FolderColumn.Name].ToString();
    item.Folder = folder.Trim('/', '\\').Replace('\\', '/');
}
private void SetBinary(SPItem item, DataRow dr)
{
    if (_context.FileColumn == null)
        throw new Exception("File Field is not set.");

    FileInfo fi = new FileInfo(dr[_context.FileColumn.Name].ToString());
    if (!fi.Exists)
        throw new Exception(string.Format("File '{0}' could not be found.",
            fi.FullName));

    item.Name = Path.GetFileNameWithoutExtension(fi.Name);
    item.Extension = fi.Extension.Trim('.');
    item.Binary = File.ReadAllBytes(fi.FullName);
}
private void SetFields(SPItem item, DataRow dr)
{
    foreach (ColumnMapping cm in _context.ColumnMapping)
    {
        SPColumn col = item.ContentType.Columns[cm.SPColumn];
        if (item.Properties.ContainsKey(col.Name))
            item.Properties[col.Name] = dr[cm.ExColumn].ToString();
    }
}
private DataTable GetItems()
{
    StringBuilder sbCols = new StringBuilder();
    List<string> selectColumns = new List<string>();

    // Build the SELECT column list from the mappings, skipping duplicates.
    for (int i = 0; i < _context.ColumnMapping.Count; i++)
    {
        string excelColName = _context.ColumnMapping[i].ExColumn;
        if (selectColumns.Contains(excelColName.ToLower()))
            continue;
        selectColumns.Add(excelColName.ToLower());

        if (sbCols.Length != 0)
            sbCols.Append(", ");
        sbCols.Append("[");
        sbCols.Append(excelColName);
        sbCols.Append("]");
    }

    if (_context.FileColumn != null &&
        !selectColumns.Contains(_context.FileColumn.Name.ToLower()))
    {
        if (sbCols.Length != 0)
            sbCols.Append(", ");
        sbCols.Append("[");
        sbCols.Append(_context.FileColumn.Name);
        sbCols.Append("]");
        selectColumns.Add(_context.FileColumn.Name.ToLower());
    }

    if (_context.FolderColumn != null &&
        !selectColumns.Contains(_context.FolderColumn.Name.ToLower()))
    {
        if (sbCols.Length != 0)
            sbCols.Append(", ");
        sbCols.Append("[");
        sbCols.Append(_context.FolderColumn.Name);
        sbCols.Append("]");
        selectColumns.Add(_context.FolderColumn.Name.ToLower());
    }

    using (OleDbConnection conn =
        new OleDbConnection(_context.ExDatabaseRequest.ConnectionString))
    {
        OleDbCommand cmd = new OleDbCommand();
        cmd.CommandType = CommandType.Text;
        cmd.Connection = conn;
        cmd.CommandText = string.Format(@"SELECT {0} FROM [{1}]",
            sbCols.ToString(), _context.Table.QueryName);

        OleDbDataAdapter oleDataAdapter = new OleDbDataAdapter(cmd);
        DataTable dt = new DataTable();
        oleDataAdapter.Fill(dt);
        return dt;
    }
}
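One thing worth calling out: the connection string itself comes from _context.ExDatabaseRequest, which the application builds for you. For reference, a typical Jet connection string for an Excel 2003 (.xls) workbook looks something like this (the file path is a placeholder, and HDR=Yes treats the first row as column headers):

// Typical Jet connection string for an Excel 2003 (.xls) workbook.
string connectionString =
    @"Provider=Microsoft.Jet.OLEDB.4.0;" +
    @"Data Source=C:\Docs\Import.xls;" +
    @"Extended Properties=""Excel 8.0;HDR=Yes""";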
Once our docs are uploaded, we'll see our Test folder. Click into it, then into our Mid West folder, and we'll see our document.
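To recap, here is the whole upload flow condensed into a single sketch, pieced together from the calls above. The list title, folder, field value, and file path are all placeholders, and I'm assuming the Lists collection is keyed by list title:

SPRepositoryProvider provider = new SPRepositoryProvider();
SPRepositoryRequest request = new SPRepositoryRequest();
request.Uri = "http://yourserver/yoursite";
request.Credentials = CredentialCache.DefaultNetworkCredentials;

SPRepository repository = provider.GetRepository(request).Repository;
SPList list = repository.Lists["Shared Documents"]; // assuming the collection is keyed by title

// Grab a content type to create the item from (here, just the first one).
SPContentType contentType = null;
foreach (SPContentType ct in list.ContentTypes.Values)
{
    contentType = ct;
    break;
}

SPItem item = list.CreateNewItem(contentType); // properties defaulted from the schema
item.Folder = "Mid West";                      // library-relative folder
item.Properties["Region"] = "Mid West";        // meta-data field
item.Name = "Report";                          // file name without the extension
item.Extension = "doc";
item.Binary = File.ReadAllBytes(@"C:\Docs\Report.doc");

// CheckIn = false leaves the document checked out after the upload.
list.UpdateItem(new SPUpdateItemRequest() { Item = item, CheckIn = false });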
Conclusion
I've done a lot of work with importing items to SharePoint, and hopefully, this library can speed things up for you. If you have any questions, feel free to ask. If you build any applications that import to SharePoint using this code, all I ask is that you please post them and share the wealth.