It is easy to find WSS ASP.NET samples for Instantiation and Task Edit forms, but it is hard to find a sample for the Modification form. Maybe many people think there is not much difference, but is that true?
The Modification form is quite useful in a WSS workflow. For example, suppose a long-running task has been assigned to an analyst, and then the analyst takes a vacation for weeks. At that point, the Modification form can be used to reassign the task to another person.
You can define a Modification form in the Feature file in a similar way to an Instantiation form. But how about the workflow part?
An Instantiation form is used at the very beginning of a workflow, but a Modification can happen anywhere within a scope.
A common way to define Modification in a workflow is to put an EnableWorkflowModification activity inside an EventHandlingScope activity. The other activities inside the same scope do the normal work. When the Modification form is accessed, an event is raised to the workflow, so that the EventHandlingScope activity can handle the event and call the event handler.
In Visual Studio, right-click the EventHandlingScope activity and select "View Event Handlers":
Then you can add OnWorkflowModified and UpdateTask activities to the EventHandlersActivity:
The Modification activities have their own ContextData and CorrelationToken, because they are effectively outside the normal workflow.
One issue I ran into: if there is an OnTaskChanged activity already defined in the EventHandlingScope (like below), you had better not add another OnTaskChanged activity inside the EventHandlersActivity; otherwise, you will be surprised to see that only one of the OnTaskChanged activities is called.
I tried to update a content type in WSS, but somehow the system was still using the older version of the content type. After two days of frustration, I found out why :(
Here is what happened. In the beginning, I created a simple content type like below:
<ContentType ID="0x01080100B7336179CFFE43e59B86E241C767010E" Name="GradingTask" Group="Grading" Description="Grading Task" Version="0" Hidden="FALSE">
</ContentType>
The content type worked perfectly, and I added several items of the GradingTask type to a list. Then I thought: how about adding custom Edit/Display pages? So I added these lines to the definition:
<XmlDocuments>
  <XmlDocument NamespaceURI="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms/url">
    <FormUrls xmlns="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms/url">
      <Edit>_layouts/Grades/GradesTaskEditForm.aspx</Edit>
      <Display>_layouts/Grades/GradesTaskEditForm.aspx</Display>
    </FormUrls>
  </XmlDocument>
</XmlDocuments>
After I installed the updated content type and tried to edit a task, WSS kept showing me the default EditForm.aspx page, not GradesTaskEditForm.aspx. Then I tried to deactivate and uninstall the content type many times, but WSS still used the default page.
Finally, I saw a line in the book "Real World SharePoint 2007": when you use a Feature to install a new version of a content type, WSS does not support cascading updates if the inherited content type is already in use somewhere. (Not an exact quote.)
Oh ... that is the reason why I failed to overwrite the old content type definition -- WSS kept GradingTask's metadata for the list even after the Feature was uninstalled.
So I deleted all items of the GradingTask type, removed the workflow setting, detached the content type from the list, deactivated/uninstalled the Feature, deleted the list, and then reinstalled/activated the Feature ... and finally my lovely custom Edit page showed up :)
According to the book, to support cascading updates I need to write code against the WSS object model API to force the cascading update ... I will try that later.
[Updated 12/28/2007] I saw this great article this morning about dealing with the mess of Content Type inheritance.
Nowadays, it is quite easy to find articles about how beautiful it is to use Microsoft SharePoint 2007 Workflow and InfoPath to manage documents or lists.
But I am too poor to buy MOSS 2007 and InfoPath. Is it possible to use the WSS 3.0 workflow framework to manage normal business objects (e.g., purchase orders, lawsuit cases) in an ASP.NET SPGridView on a WSS 3.0 site? In other words, can I build a WSS workflow and the related ASP.NET pages (workflow Association, Instantiation, Modification, and Task Edit pages) in Visual Studio, then use that workflow for each item of an ASP.NET SPGridView and modify data stored in a standalone database (not the WSS content database)?
From my current understanding, the answer is no, because a SharePoint workflow processes an SPListItem, not an item of an SPGridView; and of course, you are not supposed to create the task from your own code, because the task is of the SPWorkflowTask type, which can only be created by SPWorkflow.
OK ... fine. How about creating a SharePoint list to hold my business objects, even though I need to save my business objects in a separate database?
Well ... to use a SharePoint list, you have to create the list columns inside the SharePoint content database, not in other databases.
&*%*^%%^$
So, from my current understanding, if I want to use the WSS 3.0 workflow framework for my own business objects in a standalone database (S_DB), I have to create a list and several columns (for display and search purposes) on the WSS site. Those columns are duplicates, because they are already defined in database S_DB.
When a user starts the workflow on a list item, the workflow (Instantiation) ASP.NET page will use ADO.NET code to load the other fields from database S_DB. Then the ASP.NET page should save the updated data back to S_DB and, at the same time, save the data of the list columns into the WSS content database too.
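The dual write above can be sketched roughly like this (plain Python, with dicts standing in for the real ADO.NET connection and SPListItem; `save_order`, the column names, and the record shape are all hypothetical):

```python
# Hypothetical sketch: the standalone business database (S_DB) stays
# authoritative, and only the few display/search columns are mirrored
# into the WSS list item.

DISPLAY_COLUMNS = ["Title", "Status"]  # columns duplicated on the WSS list

def save_order(order, business_db, list_item):
    """Write the full record to S_DB, then mirror display columns to WSS."""
    business_db[order["Id"]] = dict(order)   # full record goes to S_DB
    for col in DISPLAY_COLUMNS:              # duplicated columns go to WSS
        list_item[col] = order[col]
    return list_item

s_db = {}
item = {}
save_order({"Id": 1, "Title": "PO-001", "Status": "New", "Amount": 250.0},
           s_db, item)
print(item)  # only Title and Status are mirrored; Amount lives in S_DB only
```

The point of the sketch is the ordering: the authoritative store is written first, so a failure mirroring to WSS leaves the list stale but never makes S_DB wrong.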
Is there a better way to avoid duplicating data between the WSS content database and the other standalone database?
In one of my projects, I need to analyze data in an Access application. Frankly speaking, it is the best Access application I have seen so far: through many front-end Access forms, users input data into a single back-end Access database.
But Access is not designed for client-server mode. A lock file is needed so that only one user can lock a table at a time. Sometimes, users have to wait minutes for others to finish a simple data update operation.
As for me, I am not a fan of Access, although I was amazed by how well that Access application worked. So I decided to convert it to a SQL Server database.
But how? Although SQL Server Integration Services (SSIS) can import data from Access, it is hard to preserve the database settings (e.g., foreign-key relationships). Today, I found the SQL Server Migration Assistant for Access from Microsoft. Now my life is easier :)
I scratched my head this morning over a WCF service host program. It threw a very generic exception like this:
“The communication object, System.ServiceModel.ServiceHost, cannot be used for communication because it is in the Faulted state.”
The exception above had no useful information about where the real problem was. The logic of my program was simple:
using (ServiceHost host = new ServiceHost(
    typeof(MyService), new Uri("http://localhost:8080/MyService")))
{
    host.Open();
    // ... ...
}
When I looked in the Output window of VS 2005, I saw this log entry:
'System.ServiceModel.AddressAlreadyInUseException'
But why did WCF not give me that exception directly? Finally, this blog post explains the reason: Why "using" is bad for your WCF service host.
------------------------------------------
ServiceHost Host = null;
try
{
    Host = new ServiceHost(MySingletonService);
    Host.Open();
    Console.ReadKey();
    Host.Close();
}
finally
{
    if (Host != null)
        ((IDisposable)Host).Dispose();
}
The configuration exception is thrown by the “Host.Open()” line, the code jumps into the finally block and tries to dispose the host. Here the host is not null but it is in a faulted state; this means that it cannot be disposed, and this raises the second exception that you usually see in your application.
The lesson learned is “do not use ‘using’ to host your WCF service”.
------------------------------------------
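This masking is not specific to WCF: in any language, when cleanup code in a finally block (or a disposer) throws, the new exception hides the original one. Here is a minimal Python sketch of the same effect; FaultyHost is a made-up stand-in for a ServiceHost that faults on Open():

```python
class FaultyHost:
    """Hypothetical stand-in for a service host that faults during open()."""
    def open(self):
        # The real root cause, like WCF's AddressAlreadyInUseException.
        raise OSError("address already in use")
    def dispose(self):
        # Disposing a faulted host raises its own, less useful error,
        # like WCF's "cannot be used ... in the Faulted state".
        raise RuntimeError("cannot dispose: object is in the Faulted state")

def run_with_using():
    host = FaultyHost()
    try:
        host.open()
    finally:
        host.dispose()  # raises, and MASKS the original OSError

try:
    run_with_using()
except Exception as e:
    print(type(e).__name__)  # prints RuntimeError -- the root cause is hidden
```

The caller only ever sees the dispose-time error, which is exactly why the Output window (not the exception) held the real clue.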
I have not read any technical books for weeks since I finished my project (which is rare for me), because I was reading/studying something more important for our daily life: traditional Chinese medicine, which can even cure some so-called "incurable" diseases, such as cancers or AIDS!
Frankly speaking, I had been disappointed by Chinese doctors for a long, long time since I was a child. My previous impression was that Chinese medicine and methods were too slow for illness, until recently, when I began to know several real Chinese doctors and their treatments.
One doctor is Mr. Ni Haisha (please don't be misled by his English site being about acupuncture; that is only because the USA has no license for Chinese medicine yet). His Chinese site has far more great information about the whys and hows of cancers than his English site.
Chinese medicine treats our body as a whole system. For example, the root reason for breast cancer is that the heart and small intestine are weak. But Western medicine treats our body as separate parts. That is the reason why Western medicine does not work.
If Chinese medicine is so good, then why is the medicine from many Chinese doctors not effective, and why do many Chinese doctors even take Western medicine themselves when they are ill? The reasons are quite complex. Basically, Chinese medicine is like an "art", and not many good doctors exist; many Chinese doctors learned in the wrong way; and the Chinese government admired Western technology too much and ignored the real jewels of wisdom in the long Chinese history ...
How do you find a good doctor? If your feet get warm after you take the Chinese medicine, then that doctor is a good one; otherwise, if your feet still feel cold and your symptoms still exist, then that doctor is likely a fake Chinese doctor.
Currently, I know two good Chinese doctors: Mr. Ni and Mr. Huo.
It may be obvious to many people: if # is included in a URL, it refers to a relative location within the current HTML page in the browser. For example, the URL "http://somewhere.com/home.htm#section1" refers to the ID "section1" on the "home.htm" page. So what's the point of mentioning it again here?
The interesting thing happens when such a URL is used between ASP.NET web services: the IIS server side cannot get the whole URL if a client sends a URL containing #. Basically, the ASP.NET web service can only get the front part of the URL; everything after the # is not available to the web service.
So if a web service client wants to send a parameter ("#1", "#2", or "#3") to the web service, sorry, the service side cannot see that parameter.
A bug in my recent project was related to this issue: a telephony system sent dynamic URLs to web services, sometimes with # in the URL.
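A small Python sketch (not the original ASP.NET code) shows the behavior: the part after # is the URL fragment, which clients keep to themselves and never send to the server; to actually transmit a literal # in a query parameter, it must be percent-encoded as %23. The example URL below is made up.

```python
from urllib.parse import urlsplit, quote

# The fragment (everything after '#') is a client-side concept:
# browsers and HTTP clients strip it before sending the request,
# so the server never sees it.
url = "http://somewhere.com/service.asmx?ext=#1"
parts = urlsplit(url)
print(parts.query)     # "ext=" -- the "#1" ended up in the fragment
print(parts.fragment)  # "1"

# To actually transmit a literal '#' in a parameter value,
# it must be percent-encoded as %23:
safe = "http://somewhere.com/service.asmx?ext=" + quote("#1")
print(safe)  # ...?ext=%231
```

So the fix for a bug like the one above is to percent-encode the # on the sending side before building the URL.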
We were satisfied with Crystal Reports (CR) generating PDFs on the fly, until we put the CR templates on the production servers.
The CR templates were tested on the DEV and QA servers without any problem. But when the templates were put onto the production box, we were amazed at how messy the generated PDF looked: the font sizes were mysteriously changed and paragraphs overlapped! @^@
I searched the Internet for the reason for hours, until I found one possible answer here: typically, when you see page formatting issues on different machines, it can be because of printer drivers (or the lack of them). The reporting engine relies on the printer driver configured on the machine to provide information so that a page can be rendered properly. If you designed the report on your dev machine, which uses PrinterA, and then deploy to another machine using PrinterB, the formatting could be off.
My program was a .NET web service that created the PDF document using ExportToDisk, not PrintToPrinter:
oDocument.ExportToDisk(
    ExportFormatType.PortableDocFormat, sOutputFile);
so I wondered whether the printer was really the problem. After I was told that the production server pointed to the exact same printer as the QA server, I reluctantly asked IT to check the version of the printer driver on both servers.
Then ... IT told me that the servers had different versions of the printer driver, even though they pointed to the same printer. After IT installed the same latest driver on both servers, the formatting issue was resolved. :)
Although we had to postpone the production delivery, it's good to know that some software uses the printer driver to arrange layout internally.
In the wonderful "Zero to BizTalk Weekend", BizTalk expert Geoff Snowman gave us two days of FREE hands-on labs and demos about BizTalk 2006 (and R2)! The hands-on labs were well designed, and the lab document was very detailed.
BizTalk is a useful product for integrating systems. Conceptually, it receives messages through a Receive Adapter/Port, processes them in an Orchestration (optional), and sends out messages through a Send Adapter/Port. BizTalk can run long-running transactions, which is very important for real-world business.
Below are the agenda and my comments on the labs:
Saturday (06/02/2007)
1. Architecture and Content-Based Routing: Deciding Where to Send a Message
This hello-world-style XCopy lab uses the File adapter to receive and send files without transformation. It is a good introduction to the BizTalk adapter concept.
2. The BizTalk Mapper: Transforming Between Message Formats
BizTalk uses XML internally to represent messages. When BizTalk integrates multiple systems, it is necessary to transform the different data schemas into one internal schema; after the business process, BizTalk transforms the internal XML into the corresponding external schema.
But how do you deal with non-XML input, such as flat files? Well, that is a topic in the second-day labs.
3. The SQL and FTP Adapters: Sending Messages to Databases and IIS
The FTP adapter follows the same logic as normal FTP client software, and it is easy to use.
The SQL adapter is a little more complex: it maps a BizTalk XML schema to a database table or to stored procedure parameters. Fortunately, there is a wizard to generate the SQL schema.
4. Creating a Simple Business Process. Publishing a Business Process as a Web Service. Using the SOAP Adapter
An Orchestration with a complex workflow can be published as a normal web service. Unlike the File adapter, where BizTalk checks the file system periodically for new files, a web service request can go directly into the MessageBox without waiting for BizTalk polling. I believe BizTalk uses a mechanism similar to SQL Server Notification Services to notify an Orchestration that a request has arrived.
5. Correlation: Which Instance of My Business Process Sees My Incoming Message?
Correlation has long been one of the exciting built-in features of BizTalk. Correlation is normally used to match responses with the original requests sent out by an Orchestration. You do not need to write code for correlation.
Sunday (06/03/2007)
1. The Flat File Wizard and the Pipeline Designer: Dealing with Text Files.
BizTalk 2006 has a new wizard to convert flat files into XML. The wizard parses sample flat-file data, lets the user select the delimiters, and generates the XML schema. The wizard is easy to use.
But what about binary files? Is there a wizard to parse them and generate an XML schema? No; you have to write your own pipeline component to parse a binary format.
2. Integrating with SharePoint and InfoPath
BizTalk has a Human Workflow solution. Its name sounds good, but remember: do not use it! The reason is that we have SharePoint 2007 with its built-in workflow feature. SharePoint and InfoPath are good tools for people to approve or decline messages, and BizTalk can communicate with the SharePoint database.
3. The Business Rules Engine: Separating the Business Logic from the Application
Of course, BizTalk is not only for us developers. Business people have a tool to modify business rules (e.g., change a pricing rate, or change approve/decline rules).
4. Business Activity Monitoring: Tracking the Business Process (Demo)
BAM is valuable for business people to track activities in their own vocabulary. I am not sure whether it is built on SQL Server Reporting Services, but it looks similar.
Overall, the two-day training was a very good way to learn the architecture of BizTalk and to get some hands-on experience. Given the time limit, it is hard to learn BizTalk's internals in only two days. I am looking forward to level-200 or 300 training in the future.
Yesterday I found a seemingly simple SQL problem: it is a simple task inside SQL Server Enterprise Manager (Management Studio), but there is no simple, single SQL statement to set "Is Identity" to "No" for a table column. Because the setting is not a constraint, an "ALTER TABLE" statement does not work.
It turns out I have to create an identical temporary column, copy all the data to that new column, remove the original column, and rename the temporary column back to the original name. Another way is to create a temporary table without the "Is Identity" setting and copy all the data ... Oh my, such "simple" work!
Fortunately, there is a tip for generating the SQL script for a database schema change: in SQL Server Enterprise Manager, when you change a table schema, you can let Enterprise Manager generate the schema change script for you. The leftmost button of this toolbar is "Generate change script":
Yesterday, I used it to generate a complex script to remove the "Identity" setting of a column:
-- To remove the Identity setting of the Id column
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
GO
ALTER TABLE dbo.partner_attribute_name
    DROP CONSTRAINT DF_partner_attribute_name_last_update_date
GO
ALTER TABLE dbo.partner_attribute_name
    DROP CONSTRAINT DF_partner_attribute_name_creation_date
GO
-- Create a temp table (without the Identity setting)
CREATE TABLE dbo.Tmp_partner_attribute_name
(
    partner_attribute_name_id int NOT NULL,
    attribute_name varchar(50) NOT NULL,
    description varchar(200) NOT NULL,
    last_update_date datetime NOT NULL,
    creation_date datetime NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE dbo.Tmp_partner_attribute_name ADD CONSTRAINT
    DF_partner_attribute_name_last_update_date
    DEFAULT (getdate()) FOR last_update_date
GO
ALTER TABLE dbo.Tmp_partner_attribute_name ADD CONSTRAINT
    DF_partner_attribute_name_creation_date
    DEFAULT (getdate()) FOR creation_date
GO
-- Copy all data to the temp table
IF EXISTS(SELECT * FROM dbo.partner_attribute_name)
    EXEC('INSERT INTO dbo.Tmp_partner_attribute_name
            (partner_attribute_name_id, attribute_name
            , description, last_update_date, creation_date)
          SELECT partner_attribute_name_id
            , attribute_name, description
            , last_update_date, creation_date
          FROM dbo.partner_attribute_name
          WITH (HOLDLOCK TABLOCKX)')
GO
-- Drop the original table and rename the temp table back
DROP TABLE dbo.partner_attribute_name
GO
EXECUTE sp_rename N'dbo.Tmp_partner_attribute_name'
    , N'partner_attribute_name', 'OBJECT'
GO
ALTER TABLE dbo.partner_attribute_name ADD CONSTRAINT
    PK_partner_attribute_name PRIMARY KEY CLUSTERED
    (
        partner_attribute_name_id
    ) ON [PRIMARY]
GO
COMMIT
Note: you should generate the script BEFORE you save the changes in Enterprise Manager; otherwise, that button will be disabled.
Although I am quite busy these days with the projects at hand, I still try to find time for what I really want to do: dig into compilers, operating systems, and the CLR framework.
Yesterday, I spent hours analyzing a popular syntax highlighter tool - Wilco SyntaxHighter - because it is, to some degree, a small compiler. :)
The highlighter parses the source string (the code to be highlighted) to scan tokens (comments, strings, keywords). Each token records its position and length in the source string and is associated with highlighter style data. Then the parser reads the source again to merge the parsed tokens: the string segment of each token is updated with its style data, while the other segments are left as-is.
The nice feature of this parser is that it builds a scanner chain. For example, to parse C# code, these scanners are used: CommentBlockScanner (/* ... */) -- CommentLineScanner (//) -- StringBlockScanner (@) -- StringLineScanner ("") -- WordScanner. When the current and following characters match the CommentBlockScanner, it continues reading characters to the end of the comment block and takes that block as a Comment token; if the current character does not match the CommentBlockScanner, it may match the next scanner in the chain ... If a character matches no scanner, it does not belong to any token and is ignored.
For different languages (e.g. Java, CSS, etc.), we can build and use different scanner chains, but the basic parsing logic stays the same.
The concept of a compiler is very useful when generating code dynamically. When the theory of "Software Factories" becomes real, code generation tools will be fundamental to such systems.
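To make the scanner-chain idea concrete, here is a minimal sketch in C#. All the names below (Token, Scanner, Parser, and the two toy scanners) are my own illustration, not Wilco's actual types:

```csharp
using System;
using System.Collections.Generic;

public class Token
{
    public string Kind;   // "Comment", "String", ...
    public int Start;     // position in the source string
    public int Length;
    public Token(string kind, int start, int length)
    { Kind = kind; Start = start; Length = length; }
}

public abstract class Scanner
{
    public Scanner Next;  // next scanner in the chain
    // Returns a token starting at 'pos', or null if this scanner does not match.
    public abstract Token Scan(string source, int pos);
}

public class CommentLineScanner : Scanner
{
    public override Token Scan(string source, int pos)
    {
        if (pos + 1 >= source.Length || source[pos] != '/' || source[pos + 1] != '/')
            return null;
        int end = source.IndexOf('\n', pos);
        if (end < 0) end = source.Length;       // comment runs to end of line/source
        return new Token("Comment", pos, end - pos);
    }
}

public class StringLineScanner : Scanner
{
    public override Token Scan(string source, int pos)
    {
        if (source[pos] != '"') return null;
        int end = source.IndexOf('"', pos + 1); // find the closing quote
        if (end < 0) return null;
        return new Token("String", pos, end - pos + 1);
    }
}

public static class Parser
{
    // At each position, give every scanner in the chain a chance to claim a token.
    public static List<Token> Parse(string source, Scanner chain)
    {
        List<Token> tokens = new List<Token>();
        int pos = 0;
        while (pos < source.Length)
        {
            Token token = null;
            for (Scanner s = chain; s != null && token == null; s = s.Next)
                token = s.Scan(source, pos);
            if (token != null) { tokens.Add(token); pos += token.Length; }
            else pos++;   // the character belongs to no token; skip it
        }
        return tokens;
    }
}
```

A chain for a C#-like language would put the comment scanners first, then the string scanners, then a word scanner, in exactly the order described above.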
I wrote ASP.NET code that calls a service to generate and download a PDF file. At first, I tried to use an AJAX UpdatePanel on the ASP.NET page to show progress, because the service call may take a long time. When the service call returned, I wanted to show the PDF file directly in the browser on the same ASP.NET page:
// Stream PDF kit to client
Byte[] buffer = File.ReadAllBytes(filePath);
base.Response.Clear();
base.Response.ContentType = "application/pdf";
int length = buffer.Length;
base.Response.AddHeader("Accept-Header", length.ToString());
base.Response.AddHeader("Content-Length", length.ToString());
base.Response.OutputStream.Write(buffer, 0, buffer.Length);
base.Response.Flush();
base.Response.End();
But I saw this exception message:
Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed ...
Then I realized the UpdatePanel JavaScript tried to parse the response as an update to the original web page, but it received a PDF stream instead. I found the explanation here:
The UpdatePanel control uses asynchronous postbacks to control which parts of the page get rendered. It does this using a whole bunch of JavaScript on the client and a whole bunch of C# on the server. Asynchronous postbacks are exactly the same as regular postbacks except for one important thing: the rendering. Asynchronous postbacks go through the same life cycles events as regular pages (this is a question I get asked often). Only at the render phase do things get different. We capture the rendering of only the UpdatePanels that we care about and send it down to the client using a special format. In addition, we send out some other pieces of information, such as the page title, hidden form values, the form action URL, and lists of scripts.
It turned out that I should use Response.Redirect() to a second page to show the PDF file in the browser. To avoid the same exception, that second page contains no AJAX code.
Microsoft Workflow Foundation (WF) is designed for building a kind of Domain Specific Language. You can think of WF as a higher-level programming language: for example, it has "while" and "if" statements (activities), and the statements run sequentially.
At this time, WF is good for back-end processes, but not for (web) user interfaces. Why? Because WF does not support the "Back" button. Suppose a web application uses WF to control page flow. It is very common for a user to enter data across several pages and then want to go "back" to a previous page to change the data. WF provides no built-in mechanism for that.
How could "Back" logic be implemented in WF? WF would need a stack to save the state of previous activities. When the user clicks the "Back" button, WF should pop the previous state off the stack and resume the "previous" work.
I hope a future version of ASP.NET integrated with WF will have that "Back" button feature.
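Outside WF, the stack idea can be sketched in a few lines of plain C#; PageState and PageFlow are hypothetical names I made up for this illustration, not WF types:

```csharp
using System.Collections.Generic;

// Hypothetical snapshot of one page's data; a real flow would store form fields here.
public class PageState
{
    public string PageName;
    public Dictionary<string, string> FormData = new Dictionary<string, string>();
}

public class PageFlow
{
    private readonly Stack<PageState> _history = new Stack<PageState>();
    public PageState Current { get; private set; }

    public void GoForward(PageState next)
    {
        if (Current != null) _history.Push(Current);  // remember where we were
        Current = next;
    }

    public bool GoBack()
    {
        if (_history.Count == 0) return false;        // nothing to go back to
        Current = _history.Pop();                     // restore the previous state
        return true;
    }
}
```

A WF-integrated version would also have to rewind the workflow's own execution position, which is exactly the part WF does not offer today.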
Crystal Report .NET can bind .NET objects to its report templates, which I took for granted for a long time until I really got my hands dirty these past few days.
The question is: can you bind a composite object in Crystal Report .NET? For example, an Employee class can look like this:
public class Employee
{
public string m_name;
public Address m_address;
public Salary m_salary;
}
Address and Salary are also classes.
How can I bind an Employee object to Crystal Report to show detailed Address and Salary information? It turns out Crystal Report .NET can only access the top-level properties of an object (only m_name in this case)! Crystal Report .NET is too lazy to dig into the object hierarchy.
The ugly solution? You have to create a flat class containing all the properties of all the child classes! A mapping method to fill data into the flat class is needed, of course.
In my current project, I have a class composed of many other child classes. If I flatten all the fields into one class, there will be more than one hundred properties.
Besides spending days creating the flat classes, Crystal Report also gives me a big ongoing task: keeping the flat classes synchronized with the hierarchical classes.
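For illustration, here is a sketch of that flat-class workaround. Employee follows the post; the members of Address and Salary are assumptions made up for this example:

```csharp
// Address and Salary members below are invented for the example.
public class Address { public string m_city; public string m_street; }
public class Salary  { public decimal m_base; public decimal m_bonus; }

public class Employee
{
    public string m_name;
    public Address m_address;
    public Salary m_salary;
}

// Crystal Report only sees top-level properties, so everything is flattened here.
public class FlatEmployee
{
    public string Name { get; set; }
    public string AddressCity { get; set; }
    public string AddressStreet { get; set; }
    public decimal SalaryBase { get; set; }
    public decimal SalaryBonus { get; set; }

    // The mapping method that must be kept in sync with the class hierarchy.
    public static FlatEmployee From(Employee e)
    {
        return new FlatEmployee
        {
            Name = e.m_name,
            AddressCity = e.m_address.m_city,
            AddressStreet = e.m_address.m_street,
            SalaryBase = e.m_salary.m_base,
            SalaryBonus = e.m_salary.m_bonus
        };
    }
}
```

Every new child property means touching both the flat class and the mapping method, which is exactly the synchronization burden described above.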
The Chinese New Year gala has been a "traditional" program for more than a billion people for about 20 years. I like the martial arts programs every year. This year, the martial arts program was so beautiful that many young Chinese want to learn Taiji after watching the performance.
Eight national and international Taiji champions showed their GongFu. This is international champion Zhou Bin:
Here is the link to the Taiji (it looks like Chen-style Taiji) and dance part of the 2007 Chinese New Year gala:
http://www.youtube.com/watch?v=6hpYtc0BLZ8
WhileActivity is a special activity in WF where the ActivityExecutionContext (AEC) is concerned, because it creates a new AEC for each iteration.
Why? You can see the reason in common programming languages. For example, the C# while statement below adds and removes its local variables on the stack for each iteration:
while (conditionIsMet)
{
    // output and count are created anew on the stack for every iteration
    string output = DateTime.Now.ToString();
    int count = 0;
    // ...
}
WF's WhileActivity has similar logic for each iteration: it creates a new AEC based on the template activity (e.g. the SequenceActivity inside the WhileActivity).
Will the new AEC be destroyed at the end of each iteration? You may think, "Of course! That's an obvious question." But the answer is "yes and no".
The answer is yes for normal cases with no need for compensation: only one new AEC at a time is kept in memory across the iterations.
But the answer is no when compensation is needed. If a compensatable activity is defined inside the WhileActivity and an exception occurs in one iteration, all the previous iterations must be compensated. That means the previous AECs cannot be destroyed after each iteration; they have to stay in memory for compensation.
According to Krishnan: "The runtime will clean up execution contexts after each WhileActivity iteration. But only if you don't have an ICompensatableActivity inside (if you do, the EC's will stay in memory until the next persistence point.)"
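The compensation case can be mimicked in plain C# (my own analogy, not WF code): state produced by each iteration must be kept rather than discarded, because a later failure may need to undo earlier iterations:

```csharp
using System.Collections.Generic;

// Analogy: per-iteration state is kept alive only because compensation may need it,
// just like the per-iteration AECs of a WhileActivity holding a compensatable activity.
public class CompensatableLoop
{
    // State of completed iterations, retained for possible compensation.
    private readonly List<string> _iterationState = new List<string>();

    // Runs 'iterations' iterations; a failure at index 'failAt' (-1 for none)
    // triggers compensation of all previously completed iterations.
    // Returns the names of the compensated iterations, newest first.
    public List<string> Run(int iterations, int failAt)
    {
        List<string> compensated = new List<string>();
        for (int i = 0; i < iterations; i++)
        {
            if (i == failAt)
            {
                // Undo previous iterations in reverse order, as WF compensation does.
                for (int j = _iterationState.Count - 1; j >= 0; j--)
                    compensated.Add(_iterationState[j]);
                _iterationState.Clear();
                return compensated;
            }
            _iterationState.Add("work-" + i);   // kept, not discarded per iteration
        }
        _iterationState.Clear();                // normal completion: state can go
        return compensated;
    }
}
```

On normal completion the retained state is simply dropped, which corresponds to the runtime cleaning up the execution contexts.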
In my previous post, I mentioned that XmlSerializer.dll is needed to serialize/deserialize types during a web service call. But I did not know about a potential memory leak until today, thanks to an excellent MSDN article: do not use the overload of the XmlSerializer constructor that takes the XML root element name as its second parameter!
As you may know, once a .NET assembly is loaded into memory, it is not unloaded until the hosting AppDomain is unloaded. The XmlSerializer constructor uses reflection to generate a temporary assembly for the type to be serialized. Because that code generation is expensive, the assembly is cached in memory on a per-type basis.
For example, the following code will create a cached assembly for type Employee:
XmlSerializer serializer = new XmlSerializer(typeof(Employee));
Whenever an Employee object is to be serialized, the cached assembly is used.
But sometimes we may want to change the XML root name in the serialized XML message of a web service. One option is to call an overloaded constructor that takes the XML root name as a parameter:
XmlSerializer serializer = new XmlSerializer(typeof(Employee),
new XmlRootAttribute("Manager"));
Because the root name parameter is supposed to be dynamic, XmlSerializer does not cache the generated temporary assembly. It generates a new assembly every time you create a new XmlSerializer with that parameter, and each generated assembly stays in memory until the AppDomain is unloaded. So more and more generated assemblies pile up -- memory is leaking!
If the root name is static, you can apply XmlRootAttribute to the class to change the root name of the serialized type; if the root name is highly dynamic, there is no easy way to avoid the leak yet ...
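If only a handful of root names actually recur, one mitigation often suggested is to cache the serializer instances yourself, keyed by type and root name, so that each expensive serializer (and its temporary assembly) is created only once. A sketch:

```csharp
using System.Collections.Generic;
using System.Xml.Serialization;

// Create each XmlSerializer once per (type, root name) pair and reuse it,
// so at most one temporary assembly per pair is ever generated.
public static class SerializerCache
{
    private static readonly Dictionary<string, XmlSerializer> _cache =
        new Dictionary<string, XmlSerializer>();
    private static readonly object _lock = new object();

    public static XmlSerializer Get(System.Type type, string rootName)
    {
        string key = type.FullName + ":" + rootName;
        lock (_lock)   // XmlSerializer creation may happen on many threads
        {
            XmlSerializer serializer;
            if (!_cache.TryGetValue(key, out serializer))
            {
                serializer = new XmlSerializer(type, new XmlRootAttribute(rootName));
                _cache.Add(key, serializer);
            }
            return serializer;
        }
    }
}
```

This only helps when the set of root names is bounded; for truly unbounded root names, the cache itself would grow without limit.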
I believe you have seen this warning window many times when using IE. The reason is that a web page at an HTTPS URL includes both secure (HTTPS) and nonsecure (HTTP) content. When you view the page source, you can see an image, CSS source, JavaScript src, or other content whose URL begins with "http://", not "https://".
I do not know why the IE team still keeps this "feature" in IE 7:
1) Why should a developer use HTTPS for a common image in a web page? A common image should be fetched over HTTP directly, because it is also shared by nonsecure sites.
2) If I embed a Google map in a secure site, why should I use HTTPS for Google's content?
Although developers can write code on the web server to map URLs from HTTPS to HTTP, and although users can change IE security options to enable "Display mixed content", one thing is for sure: this IE feature is useless and annoying.