Thursday, March 31, 2011

strange warning about ExtensionAttribute

I'm getting a strange warning:

The predefined type 'System.Runtime.CompilerServices.ExtensionAttribute' is defined in multiple assemblies in the global alias; using definition from 'c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll'

There is no line number given, so it's hard to figure out what it's on about.

The compiler error code is CS1685

From stackoverflow
  • Are you using someone's dll (or your own) which had implemented this attribute (with exactly the same name) itself as a means of using some c# 3.0 features on pre .Net 3.5 runtimes? (A common trick)

    This is the probable cause. Since it is using the correct one (the MS one in the GAC) this is not a problem though you should hunt down the other and remove it.

  • The compiler does not know which System.Runtime.CompilerServices.ExtensionAttribute to use.

    So it is using the definition from c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll.

    A .dll you are using might define the same attribute.

  • Expanding on ShuggyCoUk's (correct) answer

    Truthfully, it doesn't matter which version of the attribute is used (GAC, 3rd party, etc.). All that matters is that the C#/VB compiler can find some attribute with the correct name. The attribute serves no functional purpose in code. It exists purely to tell the compiler "hey, this is an extension method".

    You can safely ignore this warning.
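For reference, the hand-rolled attribute that ShuggyCoUk's answer describes looks something like the following sketch. This is not the exact code from any particular assembly, just the general shape of the trick libraries used to enable extension methods on pre-.NET 3.5 runtimes:

```csharp
// A library targeting the .NET 2.0 runtime may declare this type itself so
// the C# 3.0 compiler will accept extension methods. If two such definitions
// end up in scope at once, the compiler emits warning CS1685.
namespace System.Runtime.CompilerServices
{
    [AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class | AttributeTargets.Method)]
    public sealed class ExtensionAttribute : Attribute { }
}
```

Searching your referenced assemblies for a type with this exact full name is the quickest way to "hunt down the other one".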

In SQL Server, is a sort by short substring more efficient than a sort by the entire, long field?

Consider the following SQL Server 2005/2008 query:

Select [UID], [DESC] From SomeTable Order By [Desc];

If Desc is a fairly long field (Varchar(125) with many entries > 70 chars) and you don't need a strict sorting, would it be more efficient to do this:

Select [UID], [DESC] From SomeTable Order By Substring([Desc], 0, 20);

The advantage is that all comparisons are pretty short (20 characters, max). The disadvantage is that it incurs the Substring call. For present purposes, assume that you don't want to put an index on this field as this is not a primary key and the above is a fairly rare operation. Which option would you choose?

Note: I'm asking mostly out of curiosity here. In my application, Desc is an indexed field and I am not using Substring. However, I briefly considered using Substring and it occurred to me that I didn't truly know which of the above approaches would be more efficient.

Finally, a bonus question: is it true that using Substring on an indexed field would make the optimizer skip the index and really slow things down? I don't think the optimizer is smart enough to use the index if Substring is used (even with a zero base), but I am a bit too busy to test it out right now. However, if you know differently, please correct me!

Update/clarification: you should be assuming that the Desc field is not indexed for purposes of the original question. If it is indexed, the answer is pretty easy.

From stackoverflow
  • I don't think so. Calling the function is what will damage performance most in this case. And yes, functions are very likely to make the optimizer avoid indexes.

  • Your last part is completely true.

    As for the sorting issue: whether it's quicker to sort on a substring of the first 20 characters depends on the length. If the strings are, say, 30 characters the answer is no; if 300 characters, then maybe yes. I don't know where the boundary lies. But the sort compares character by character, so with 21-character strings it's quicker to skip the overhead of the substring call and let it compare the extra 1 character.

    What you could do is have a further column which is a truncated description and sort on this instead.

  • You've said to ignore the fact that [Desc] is indexed, however given that you say that it is indexed and assuming [UID] is the PK and using a Clustered Index, your query is "covered" by the index on [Desc], and thus SQL is going to read the records in index order ... so putting a SUBSTRING will cause it to have to take an extra step to sort by first 20 characters, whereas they were already read in sorted order

    is it true that using Substring on an Indexed field would make the optimizer skip the index and really slow things down

    Generally yes, if the field is in the WHERE clause. Any function applied to a field in the where clause is likely to cause the optimiser to skip the indexes. Generally speaking.

  • Use of a non-clustered index implies an implicit JOIN.

    The index itself does not contain the non-indexed values, it contains only references to the TABLE's blocks.

    To get the non-indexed values, you need to scan over the index and read from these blocks in a nested loop.

    As a rule of thumb, INDEX SCAN WITH TABLE LOOKUP is about 10 times as costly as the TABLE SCAN.

    If you need all the results of an ordered query, especially as part of a more complex query involving nested loops, it's sometimes more efficient to perform a TABLE SCAN and sort the results.

    The table needs to be sorted only once, and the results of the sort will be kept and reused. In this case, SUBSTRING may be more efficient.

    If you need 5% of ordered results or less, then the INDEX SCAN will be more efficient, in this case you need to sort on the whole column.

    Also, index lookup is always more responsive, as you get the first rows faster.

  • Something you might want to consider is this. When sorting strings, assuming good optimized algorithms, you don't have to analyze the entire string to find out which string comes first. Consider the two strings

    F3294r02343232423
    B3920490234324234
    

    You only have to analyze the first character of each before you know that the second string should come first. I'm not sure how much this comes into play with your specific data set, but it's something you should think about.

    As a test, you may want to create a copy of your table with the exact same data and indexes, but truncate the field you are sorting on to 20 characters, and see if there is any noticeable increase in speed due to the smaller amount of data. If there is a significant performance increase, you may want to go with what Robert wrote, and create a second column with the data already truncated so you don't have to use the substring function.
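The short-circuit point in the last answer is easy to check directly. Here is a small Python sketch (Python is used just for illustration; the same holds for any lexicographic comparison, including SQL Server's):

```python
# Lexicographic comparison stops at the first differing character, so two
# long strings with different prefixes compare as cheaply as two short ones.
a = "F3294r" + "x" * 1_000_000
b = "B39204" + "x" * 1_000_000

# Only the first character needs to be examined to order these:
print(a > b)  # True, decided at position 0

# Sorting long strings with distinct short prefixes is therefore dominated
# by the prefix comparisons, not by the total string length.
rows = ["F3294r02343232423", "B3920490234324234", "A0000000000000000"]
print(sorted(rows))
```

This is why truncating to 20 characters mostly helps when many strings share long common prefixes; if they diverge early, the full-column sort never looks past the first few characters anyway.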

How can I turn off cached requests in Visual Studio load test

When running a load test I want to turn off cached requests.

My web test has the Cache Control setting turned off, which is supposed to mean don't cache. However, it looks like it is caching the images etc; which I don't want.

How can I do this?
Thanks

From stackoverflow
  • This is set by the "Percentage of New Users" setting in the scenario properties. To ensure that there is no caching, set the property to 100, which indicates that 100% of the simulated users will appear to be coming to the site for the first time.

matching cells in one column with another column in another sheet

I have 2 sheets: one has a column for the date and another for the amounts, and the other sheet has a column for the date and the amounts, plus another column with a description of each amount. How can I match these two columns of amounts? I want a formula that tells me in which cell on the first sheet I can find a certain amount that also exists on the other sheet.

Thanks a lot if someone can help me

From stackoverflow
  • If you are trying to reference a cell in another sheet, you use the following:

    ='SheetName'!F2
    

    where F2 is the cell you want to retrieve the value of.

  • How about ADO?

    Sub ListMatches()
    Dim cn As Object
    Dim rs As Object
    Dim strFile As String
    Dim strCon As String
    Dim strSQL As String
    
        'See: http://support.microsoft.com/kb/246335
    
        strFile = Workbooks(1).FullName
        strCon = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & strFile _
            & ";Extended Properties=""Excel 8.0;HDR=Yes;IMEX=1"";"
    
        Set cn = CreateObject("ADODB.Connection")
        Set rs = CreateObject("ADODB.Recordset")
    
        cn.Open strCon
    
        'Join the two sheets on date and amount to list the matches
        strSQL = "SELECT s2.AcDate, s2.Amount, s1.Description " _
        & "FROM [Sheet2$] s2 INNER JOIN [Sheet1$] s1 " _
        & "ON s2.AcDate=s1.AcDate AND s2.Amount=s1.Amount"
    
        rs.Open strSQL, cn, 3, 3
    
        'Write the matching rows to the third sheet
        Worksheets(3).Cells(2, 1).CopyFromRecordset rs
    
        rs.Close
        cn.Close
    
    End Sub
    
  • I'm not sure I understand your question, but it sounds like you want to look at the MATCH and/or VLOOKUP worksheet functions. (MATCH can tell you the position of a specific value in a list of values, and VLOOKUP can find a value in a column, and then give you the value from the same row in a different column.)
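Concretely, assuming the amounts sit in column B on both sheets (the cell references here are illustrative, not from the question), MATCH reports the row on Sheet1 where an amount from Sheet2 appears:

```
=MATCH(B2, Sheet1!B:B, 0)
```

and, if the descriptions are in column C of the sheet that has them, VLOOKUP can pull the description for a matched amount:

```
=VLOOKUP(B2, Sheet2!B:C, 2, FALSE)
```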

converting RELAX NG to XSD with trang while keeping external namespace elements (for use with JAXB)

I am using trang to convert a RELAX NG .rng file to an XML Schema .xsd file for use with JAXB. Is there a way to put attributes/elements in the .rng file and have them show up in the .xsd file? I have an external .xjb file that I use for JAXB settings, but there are some JAXB settings that are very closely coupled to the schema itself and it seems like it makes more sense to (somehow) put them in the .rng file rather than the .xjb file.

Any advice?

From stackoverflow
  • My opinion is that what you're doing now is the best way and you should keep your JAXB customizations separate from your RELAX NG schema. JAXB customizations in an XML Schema are ugly at best, distracting and confusing at worst. With RELAX NG, there's much less of a mapping, and my guess is that you'll still need to put some of the customizations in a separate JAXB customization file, which means that your customizations will be in two different files.

axis wsdl generation

Hi, I'm using Axis to model a sample WebService. What I'm doing now is trying to understand what the limitations of the automated wsdl and code generation are.

Now for some server side code:

this is the skeleton of the sample web service:

public class TestWebService {
  public AbstractAttribute[] testCall( AbstractAttribute someAttribute ) {
    ....

and my data classes:

public abstract class AbstractAttribute<T> {
  String name;

  /* get/set for name */
  public abstract T getValue();
  public abstract void setValue(T value);
}

public class IntAttribute extends AbstractAttribute<Integer> {
  Integer value;
  public Integer getValue(){ return value; }
  public void setValue(Integer value){ this.value = value; }
}

public class StringAttribute extends AbstractAttribute<String> {
  String value;
  /* ok, you got the point, get/set for value field */
}

The eclipse tool for Axis2 is quite happy to generate a wsdl from these sources, including the schema for the attribute classes, which is:

<xs:complexType name="AbstractAttribute">
    <xs:sequence>
        <xs:element minOccurs="0" name="name" nillable="true" type="xs:string"/>
        <xs:element minOccurs="0" name="value" nillable="true" type="xs:anyType"/>
    </xs:sequence>
</xs:complexType>
<xs:complexType name="IntAttribute">
    <xs:complexContent>
        <xs:extension base="xsd:AbstractAttribute">
            <xs:sequence>
                <xs:element minOccurs="0" name="value" nillable="true" type="xs:int"/>
            </xs:sequence>
        </xs:extension>
    </xs:complexContent>
</xs:complexType>
<xs:complexType name="StringAttribute">
    <xs:complexContent>
        <xs:extension base="xsd:AbstractAttribute">
            <xs:sequence>
                <xs:element minOccurs="0" name="value" nillable="true" type="xs:string"/>
            </xs:sequence>
        </xs:extension>
    </xs:complexContent>
</xs:complexType>

Now, I see something strange here: AbstractAttribute doesn't have the **abstract="true"** attribute, and it defines an anyType value element, which gets redefined in IntAttribute and StringAttribute. I don't even know if this is a legal schema (I don't think it's legal, by the way).

Moreover, if I try to generate a client from this wsdl (again using the eclipse tools), the generated sources won't compile, because AbstractAttribute defines an

Object localValue;

field and Int/String Attribute defines

int localValue;

and

String localValue;

I tried to "accommodate" the sources (without much hope, obviously), and the result is that the server tries to instantiate an AbstractAttribute instance (throwing an InstantiationException).

So my question is: is there a way to model something like the data model above, or are web services and XML schemas in general not the best tools for this particular case?

From stackoverflow
  • To explain the problem you are encountering, it helps to think of what Axis needs to do when your service is called.

    Axis is simply a java web-application...when it receives a request for a service, it will look up the mapping that you've defined for it. If it finds a mapping, it tries to create an instance of the necessary classes you've defined to service the request.

    If you've defined the classes as abstract or as interfaces then you'll get InstantiationExceptions, since these types can't be created. When Axis tries to create the wsdl, it won't be able to figure out what type to put, so it will use "anyType."

    To answer your question: you CAN use the model you have above in your code, but you won't be able to use these classes with Axis. What we have typically done in our projects is:

    1. Define the classes we need, as we would in a typical Object-Oriented application
    2. Define "transport-only" classes that are used for web services. These classes are composed of simple types and can be easily created. They are only used for exchanging web-service messages. We use these classes with Axis.
    3. Find some way for these two types of classes to easily share/exchange information. You can have interfaces that are shared by both (but that Axis doesn't know about), or even use BeanUtils.copyProperties to keep two different objects in sync.

    Hope that answers your question.
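The "transport-only" idea in step 2 above can be sketched like this (written in Python for brevity rather than Java; the class and field names are made up for illustration):

```python
# The rich, polymorphic model stays on the server; only a flat record with a
# type tag crosses the wire, so the (de)serializer never has to instantiate
# an abstract class.

class IntAttribute:
    def __init__(self, name, value):
        self.name, self.value = name, value

class StringAttribute:
    def __init__(self, name, value):
        self.name, self.value = name, value

def to_transport(attr):
    # Flatten any attribute subtype into simple, concrete fields.
    return {"name": attr.name,
            "type": "int" if isinstance(attr.value, int) else "string",
            "value": str(attr.value)}

def from_transport(rec):
    # Rebuild the concrete subtype from the type tag.
    if rec["type"] == "int":
        return IntAttribute(rec["name"], int(rec["value"]))
    return StringAttribute(rec["name"], rec["value"])
```

The round-trip only ever constructs concrete classes, which is exactly what Axis needs on the server side.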

How can I update a win32 app gui while it is waiting for another program to finish?

I am currently using CreateProcess/WaitForSingleObject from within a win32 GUI app to launch a small GUI app that deals with software licensing issues. This all works fine, but it essentially hangs the "parent" app while it waits for the licensing app to finish its work. During this time, updates to the parent app do not occur and it ends up with ugly white squares if the utility app window is moved.

Also, for some strange reason, while the utility app is running, if I copy something from within that app to the clipboard, it HANGS. I haven't figured out why yet, but it only happens if I am waiting for the app to finish from within the parent app.

So I'm thinking that if I can cause the parent app to handle its events while waiting for my other app to finish, it might solve both problems.

So, is there a replacement for CreateProcess/WaitForSingleObject that also handles UI updates?

From stackoverflow
  • I suggest you handle this as follows:

    • Parent application does CreateProcess, and then returns immediately instead of waiting for a response or for the utility app to finish
    • Because the parent application has returned, it can handle other window messages (e.g. WM_PAINT)
    • When the utility app finishes, it notifies the parent application (e.g. using the PostMessage and RegisterWindowMessage APIs)
    • Parent application handles the notification received via PostMessage
    • Parent application may also have a Windows timer (WM_TIMER) running, so that it knows if the utility app is killed before it sends its notification
  • You can get a hanging problem if the app you are spawning causes a sendmessage broadcast, either explicit or implicit. This is clipped from my website:

    The problem arises because your application has a window but isn't pumping messages. If the spawned application invokes SendMessage with one of the broadcast targets (HWND_BROADCAST or HWND_TOPMOST), then the SendMessage won't return to the new application until all applications have handled the message - but your app can't handle the message because it isn't pumping messages.... so the new app locks up, so your wait never succeeds.... DEADLOCK.

    I don't do clipboard code, but if that causes the situation above (believable), then you'll deadlock. You can:

    • put the launch of the secondary application into a little thread
    • use a timeout and spin around a PeekMessage loop (yuck)
    • use the MsgWaitForMultipleObjects API.

    No preference is implied by that ordering... I'm assuming you don't create the spawned application yourself, in which case you could use IPC to get around this issue, as ChrisW suggested.

  • You should create a thread that does only the following:

    • call CreateProcess() to run the other app

    • call WaitForSingleObject() to wait for the process to finish - since this is a background thread your app will not block

    • call CloseHandle() on the process handle

    • call PostMessage() with a notification message for your main thread

    Now you only need to make sure that your main application has its GUI disabled to prevent reentrancy problems, possibly by showing a modal dialog that informs the user that the other app is running and needs to be dealt with first. Make sure that the dialog cannot be closed manually, and that it closes itself when it receives the posted notification message from the background thread. You can put the whole thread creation into this dialog as well, and wrap everything in a single function that creates, shows and destroys the dialog and returns the result your external application produces.
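The shape of that background thread can be sketched as follows (in Python rather than Win32 C, purely for illustration; PostMessage becomes a thread-safe queue here, and all names are made up):

```python
import queue
import subprocess
import sys
import threading

def launch_and_notify(cmd, notifications):
    # Background thread: start the process, block until it exits, then tell
    # the GUI thread. The GUI's message loop keeps pumping the whole time,
    # so the parent window repaints normally.
    def worker():
        proc = subprocess.Popen(cmd)
        proc.wait()
        notifications.put(("licensing-done", proc.returncode))
    threading.Thread(target=worker, daemon=True).start()

# The "GUI thread" picks the notification up from its event loop instead of
# blocking on the process handle:
notifications = queue.Queue()
launch_and_notify([sys.executable, "-c", "pass"], notifications)
event, code = notifications.get(timeout=10)
print(event, code)
```

Only the tiny worker thread ever blocks; the thread that owns the windows never does, which is the whole point of the answer above.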

  • You could put the WaitForSingleObject call in a loop and use a relatively small value for the dwMilliseconds parameter.

    The condition to exit the loop is when the WaitForSingleObject call returns WAIT_OBJECT_0.

    In the loop you have to examine the message queue to see if there are any messages that you must process. How you handle this is really up to you, and it depends on your typical needs.

    // Assuming hYourWaitHandle is the handle that you're waiting on
    //   and hwnd is your window's handle, msg is a MSG variable and
    //   result is a DWORD variable
    //
    
    // Give the process 33 ms (you can use a different value here depending on 
    //  how responsive you wish your app to be)
    while((result = WaitForSingleObject(hYourWaitHandle, 33)) == WAIT_TIMEOUT)
    { 
       // if after 33 ms the object's handle is not signaled..       
    
       // we examine the message queue and if there are any waiting..
       //  Note:  see PeekMessage documentation for details on how to limit
       //         the types of messages to look for
       while(PeekMessage(&msg, hwnd,  0, 0, PM_NOREMOVE))
       {
         // we process them..
         if(GetMessage(&msg, NULL, 0, 0) > 0)
         {
           TranslateMessage(&msg);
           DispatchMessage(&msg);
         }
       }
    } 
    // if you're here it means WaitForSingleObject returned WAIT_OBJECT_0, so you're done
    //  (but you should always check and make sure it really is WAIT_OBJECT_0)
    if(result != WAIT_OBJECT_0)
    {
        // This should not be.. so react!
    }
    

    jussij : This is the simplest way to keep the application responsive.
  • Your parent process appears to hang because the WaitForSingleObject() call blocks your thread until the handle you pass into the call is signaled.

    Your child process likely hangs during the copy-to-clipboard operation because it is, as a part of that operation, sending a message either specifically to the parent process's window or to all top-level windows. The message loop in your parent process's thread is not running, because it is blocked waiting until the child process exits, so the message is never processed and the child process remains blocked.

    Instead of calling WaitForSingleObject(), you can call MsgWaitForMultipleObjects(). If you specify QS_ALLINPUT for the dwWaitMask parameter, MsgWaitForMultipleObjects will return either when your event is signaled or when there is input in the thread's message queue. If MsgWaitForMultipleObjects() returned because a message is available, you can process it and resume waiting:

    MSG msg;
    DWORD reason = WAIT_TIMEOUT;
    while (WAIT_OBJECT_0 != reason) {
        reason = MsgWaitForMultipleObjects(1, &hChildProcess, FALSE, INFINITE, QS_ALLINPUT);
        switch (reason) {
        case WAIT_OBJECT_0:
            // Your child process is finished.
            break;
        case (WAIT_OBJECT_0 + 1):
            // A message is available in the message queue.
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
                // Note that if your main message loop does additional processing
                // (such as calling IsDialogMessage() for modeless dialogs)
                // you will want to do those things here, too.
            }
            break;
        }
    }
    

load a dynamic library from elisp

I need to use some native code and I would like to try elisp as a scripting language. Is it possible to call native functions which are implemented in a dynamic library (a dll on Windows)? Or is the only possibility to implement native code, wrap it with DEFUN(...) macros, and recompile emacs?

From stackoverflow
  • You can not dynamically load a library with Emacs Lisp code. To work around this people usually call out to an external program (process) and control it by sending commands to it and reading its output.

    You can look at slime, tramp, gnuserv, movemail and some other similar hacks for examples.

  • This feature exists only in SXEmacs, which doesn't have support for Windows, but on Unixes it works fine.

  • GNU Emacs 23 has D-BUS support. While primarily a Unix thing, D-BUS is supported on Windows and provides an IPC system for user-level applications.

    You could probably build what you want with that.

MOSS'07 - Page View Web Part Slows Menu Hovers

Hello everyone,

In our MOSS '07 site we have a page that contains just a Page Viewer web part in it that points to a site on another server. However, I've noticed that on that page (and any others that have a Page Viewer web part on it) our drop down menus and hover effects are SUPER SLOW and completely max out the CPU on the visitor's computer (process is IEXPLORER.)

Through testing, I was able to determine that it doesn't matter what URL the web part is pointed to...just having the IFRAME on the page seems to cause it (just setting the viewer to load Google's homepage--which is probably the simplest site I know--still causes the problem). If I go and remove the web part, the menus start functioning just fine again.

I attached a debugger to the process and stepped through the Menu_HoverStatic and called functions and it seems to have a hard time when assigning panel.scrollTop to zero in the PopOut_Show function.

Has anyone else noticed this? ...perhaps found a solution to it? I can't find where to edit PopOut_Show function on our server (I think it's a resource in one of the .NET DLLs) or else I'd just comment out that line as I don't think it's really important anyway...at least on our site.

I really like the ability to have web pages from another server hosted in our SharePoint site, but the performance on the hovers is AGONIZING...and, honestly, unacceptable. Depending on the resources of the user's computer, the hover effects can take 15 seconds to complete at times!!!!

Any suggestions would be really appreciated!

Thanks, -Dan

From stackoverflow
  • SharePoint's built-in JavaScript is probably making the browser wait until the IFrame within the Page Viewer Web Part has completely loaded. If you can see a status bar message that says "Please wait while scripts are loaded..." when you attempt to click on the page then that's definitely the problem.

  • Thank you for your reply. I was actually able to discover what the problem was (my apologies for not sharing it here with everyone when I did!)

    The problem wasn't so much having the IFRAME on the page; it was because I had set the zone to be 100% width and height. Because of a bug in IE, the calculation of the dropdown's location was erroring (I don't remember exactly what javascript function or call was to blame, but I remember stepping through it with the debugger). I believe it had something to do with "location offset" or something like that. My take at the time was that it was trying to position the dropdown menu on the screen, and the calculation for positioning it was failing.

    To get around it, I had to set up a javascript routine to programmatically set the height of the zone after the page loaded. Setting the height exactly prevented the dropdown problem in the menus. Of course, it wasn't ideal, because if a user resizes the window, the IFRAME (or, more precisely, the zone it's in) doesn't resize with it. But it was a suitable band-aid for the problem.

    I'm hoping that IE 8 will fix this when it's released.

    Thanks again for the response! -Dan

how to have make targets for separate debug and release build directories?

Hi all,

I am looking for suggestions to properly handle separate debug and release build subdirectories, in a recursive makefile system that uses the $(SUBDIRS) target as documented in the gnumake manual to apply make targets to (source code) subdirectories.

Specifically, I'm interested in possible strategies for implementing targets like 'all', 'clean', 'realclean', etc.; targets that either assume one of the trees or should work on both trees are causing me problems.

Our current makefiles use a COMPILETYPE variable that gets set to Debug (default) or Release (the 'release' target), which properly does the builds, but cleaning up and make all only work on the default Debug tree. Passing down the COMPILETYPE variable gets clumsy, because whether and how to do this depends on the value of the actual target.

From stackoverflow
  • One option is to have specific targets in the subdirectories for each build type. So if you do a "make all" at the top level, it looks at COMPILETYPE and invokes "make all-debug" or "make all-release" as appropriate.

    Alternatively, you could set a COMPILETYPE environment variable at the top level, and have each sub-Makefile deal with it.

    The real solution is to not do a recursive make, but to include makefiles in subdirectories in the top level file. This will let you easily build in a different directory than the source lives in, so you can have build_debug and build_release directories. It also allows parallel make to work (make -j). See Recursive Make Considered Harmful for a full explanation.

  • If you are disciplined in your Makefiles about the use of your $(COMPILETYPE) variable to reference the appropriate build directory in all your rules, from rules that generate object files, to rules for clean/dist/etc, you should be fine.

    In one project I've worked on, we had a $(BUILD) variable that was set to (the equivalent of) build-$(COMPILETYPE), which made rules a little easier since all the rules could just refer to $(BUILD); e.g., clean would rm -rf $(BUILD).

    As long as you are using $(MAKE) to invoke sub-makes (and using GNU make), you can automatically export the COMPILETYPE variable to all sub-makes without doing anything special. For more information, see the relevant section of the GNU make manual.

    Some other options:

    • Force a re-build when compiler flags change, by adding a dependency for all objects on a meta-file that tracks the last used set of compiler flags. See, for example, how Git manages object files.
    • If you are using autoconf/automake, you can easily use a separate build out-of-place build directory for your different build types. e.g., cd /scratch/build/$COMPILETYPE && $srcdir/configure --mode=$COMPILETYPE && make which would take the build-type out of the Makefiles and into configure (where you'd have to add some support for specifying your desired build flags based on the value of --mode in your configure.ac)

    If you give some more concrete examples of your actual rules, maybe you will get some more concrete suggestions.
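A minimal sketch of the $(BUILD) convention described above, as a top-level Makefile fragment (the directory layout and variable names are illustrative, not from the question's build system):

```make
# COMPILETYPE defaults to Debug; "make COMPILETYPE=Release" builds the other tree.
COMPILETYPE ?= Debug
BUILD       := build-$(COMPILETYPE)
export COMPILETYPE                  # sub-makes invoked via $(MAKE) see the same setting

all:
	mkdir -p $(BUILD)
	$(MAKE) -C src all          # each sub-make drops objects under ../$(BUILD)

clean:
	rm -rf $(BUILD)             # cleans only the currently selected tree

realclean:
	rm -rf build-Debug build-Release   # removes both trees regardless of COMPILETYPE
```

With this shape, 'clean' naturally respects COMPILETYPE while 'realclean' sidesteps the passing-down problem by naming both trees explicitly.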

java swing : custom everything - subclass jcomponent or jpanel or ... ?

Hiya - quick one - is there any harm / value in subclassing JComponent as compared to JPanel ?

To me they pretty much look to be the same thing if I'm doing my own drawing & the object won't have any children, however there seems to be a pref for subclassing JPanel over JComponent - just looking for opinions on why this might be ...

Thx :-)

From stackoverflow
  • If you are drawing the whole component yourself, then use JComponent. JPanel is just a simple concrete instance of JComponent (which is abstract), and is not really meant to have its methods overridden.

    JPanel is sometimes subclassed so that the subclass constructor can add various controls/layout rather than having to do it via some method call.

    Tom Hawtin - tackline : Many people think JPanel adds setOpaque(false), but that actually depends upon which look and feel is being used.

WCF REST Starter Kit - support for multiple resources?

I just started tinkering with the WCF REST Starter Kit, I watched the screencasts and... I suspect I'm just being stupid. :) Out of the box (with the provided templates) I can create REST services for a singleton resource or a collection of a resource. But what about support for different kinds of resources in the same project? For example, if I have Books, I also want to have Authors and Publishers. With the provided templates I don't see an easy way to accomplish this. Creating a service (and thus a VS project) for each resource sounds ridiculous to me. I need to support multiple resource types in a single service so that they can be accessed under a similar URL, so that the user only has to replace the last part, like http://service/books to get all books and http://service/authors/32 to get a particular author.

Can anyone shed some light on this? I know this can be created with a plain WCF service, but the Starter Kit has all the boilerplate in place already, so why not use it? How would one approach a template generated project to add support for different resource types?

Thanks.

From stackoverflow
  • I think you're overthinking the WCF REST Starter Kit. Try to think of it as just a WCF Service that's configured for easy setup in the http environment. The default templates that come with the WCF REST Starter Kit are meant to be used as samples. You will have to create your own signature or adapt theirs to meet your needs.

    The key parts that you'll want to focus on are the code in the .svc file (you can't access it double clicking, you'll have to choose open with) and the [ServiceContract] interfaces.

    Modify the [ServiceContract] interface in the code behind provided to look just like it would for a regular WCF Service.

    Here is a sample of an ATOM Pub Feed modified for your needs

    [ServiceBehavior(IncludeExceptionDetailInFaults = true, InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Single)]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    [ServiceContract]
    public partial class LibraryFeed
    {
        public LibraryFeed()
        {
        }
    
        /// <summary>
        /// Returns an Atom feed.
        /// </summary>
        /// <returns>Atom feed in response to a HTTP GET request at URLs conforming to the URI template of the WebGetAttribute.</returns>
        [WebHelp(Comment = "Gets a list of Books.")]
        [WebGet(UriTemplate = "/books/?numItems={numItems}")]
        [OperationContract]
        public Atom10FeedFormatter GetBooks(int numItems)
        {
            var books = GetAllBooks(); // GetAllBooks, GetItems and CreateFeed are assumed helper methods
            List<SyndicationItem> items = GetItems(books, numItems);
    
            SyndicationFeed feed = CreateFeed(items);
    
            WebOperationContext.Current.OutgoingResponse.ContentType = ContentTypes.Atom;
            return feed.GetAtom10Formatter();
        }
    
        /// <summary>
        /// Returns an Atom feed.
        /// </summary>
        /// <returns>Atom feed in response to a HTTP GET request at URLs conforming to the URI template of the WebGetAttribute.</returns>
        [WebHelp(Comment = "Gets a list of Authors.")]
        [WebGet(UriTemplate = "/authors/?numItems={numItems}")]
        [OperationContract]
        public Atom10FeedFormatter GetAuthors(int numItems)
        {
            var authors = GetAllAuthors(); // GetAllAuthors is an assumed helper method
            List<SyndicationItem> items = GetItems(authors, numItems);
    
            SyndicationFeed feed = CreateFeed(items);
    
            WebOperationContext.Current.OutgoingResponse.ContentType = ContentTypes.Atom;
            return feed.GetAtom10Formatter();
        }
    
        /// <summary>
        /// Returns an Atom feed.
        /// </summary>
        /// <returns>Atom feed in response to a HTTP GET request at URLs conforming to the URI template of the WebGetAttribute.</returns>
        [WebHelp(Comment = "Gets a single Author.")]
        [WebGet(UriTemplate = "/authors/{id}")]
        [OperationContract]
        public Atom10FeedFormatter GetAuthor(string id)
        {
            // GetSingleAuthor is an assumed helper returning a SyndicationItem
            var author = GetSingleAuthor(id);
            SyndicationFeed feed = CreateFeed(new List<SyndicationItem> { author });

            WebOperationContext.Current.OutgoingResponse.ContentType = ContentTypes.Atom;
            return feed.GetAtom10Formatter();
        }
    }
    
    Pawel Krakowiak : I figured that out eventually. I was trying to use the template projects as the base of my development, not merely samples, which they are. I just created my own WCF Service and am reusing the extensions library from the Starter Kit.

What database development tool would you recommend to Eclipse Java developer?

I am an experienced Java developer, used to all the nice features that Eclipse provides for Java development (in particular Ctrl+T to open type, Ctrl+Click or F3 to open referenced declaration, outline, ...). I really like this degree of comfort.

Now I have a project where I need to make some changes in the existing database (stored procedures, etc.) and I am looking for a tool which would give me a similar level of comfort. (The database is Sybase.) In particular:

  1. Edit/Modify the stored procedures
  2. Have some syntax highlighting for stored procedures/sql
  3. Some code completion (i.e. Table/View/Procedure/variable names)
  4. Have an easy way to find a procedure (something like Ctrl+T), I don't like scrolling through the list of 100s of procedures in my DB
  5. Have an easy way to navigate to referenced object (ie. from exec ProcName to the ProcName definition, or table, etc.)
  6. Highlight/List the variable usages inside the procedure body

I have tried several free tools I found around (Eclipse Database Development, Oracle SQL Developer, some old version of Aqua Data Studio), but none of them seems to get even close to points 4. and 5., they cover (to some extent) just 1, 2, 3.

I am looking for tool recommendations, eventually hints, how to workaround the missing features (i.e. how to change my habits).

EDIT: Oracle SQL Developer is not limited to Oracle, but can be used for other databases as well (it just needs a JDBC driver). See the bottom of this page for detailed instructions.

Thanks

From stackoverflow
  • SQL Navigator has features 1 to 5. I don't know about 6.

    But it is not free. Quite expensive actually, with the base edition at $870 and the Xpert edition at $2275.

  • Also check out PL/SQL Developer (http://www.allroundautomations.com/plsqldev.html), but it is also not free.

    Michal : Unfortunately this PL/SQL Developer seems to support only Oracle, so it's useless for Sybase development. Otherwise it looks nice.
  • Finally, I settled on Sybase WorkSpace, which is a product from Sybase built on top of Eclipse Database Development. It adds some features missing in Eclipse Database Development (i.e. a nice stored procedure editor with more code completion, F3 to jump to the declaration of a variable or stored procedure, debugging, outline, etc.). Formally, it has 1, 2, 3 and 5. It does not have 4 and 6, but it has the feel of Eclipse I am used to, so I feel quite comfortable with it.

    The main advantage is that Sybase gives the license for free to anyone who has a Sybase database server license (though they do not seem to check this in any way).

Default icons for Windows applications?

I've just recently begun coding desktop apps in C# and I'm wondering if Visual Studio or MSDN provides a default set of icons for applications, i.e. icons for common actions, GUI elements, data types, etc. For instance, I have a button that the user clicks on to select a file; it would be nice if I could use Windows' standard folder icon.

From stackoverflow
  • You should be able to find the icons in a zip file named "VS2008ImageLibrary.zip" located here:

    %Program Files%\Microsoft Visual Studio 9.0\Common7\VS2008ImageLibrary\1033

    BeefTurkey : This was exactly what I was looking for. Thank you!
  • There is an image library packaged with Visual Studio. Do a search for icon files in the Visual Studio install directory to reveal its location. It may or may not be there, or will be in different locations, depending on the version of VS you have installed.

    If memory serves the Express editions come without this feature, but all other versions since at least Visual Studio .NET do.

  • Check out http://www.famfamfam.com/lab/icons/ for very nice and exhaustive icon sets. As far as I can tell they are free.

Why _not_ use HTML tables for layout

I have been pondering on this over the last couple of days. I'm currently writing a web application (actually adding a screen to an existing, extensive application). I know that using "tables" for layout is a big no-no. But in this app, all of the pages use tables for layout. I decided to do my new page using divs instead. But I came across 2 hurdles.

  1. A lot of the layout stuff that I need to do was already done on the other pages, and if I use divs instead, I can't reuse any of that (CSS and JS and that kind of stuff). That leads to more work and pushes out my completion date.

  2. Some of the more complex parts of the layout are really difficult to do using divs and are really easy to do using tables. Maybe it's just me being "old-school"y, as I am not much of a web/HTML kind of guy (I get by).

So, why can't I use tables? I'm not trying to be argumentative, just want to see if there are any compelling arguments beyond the "It's the right thing to do" type stuff. I know it is the right thing to do. I'd like to know why?

From stackoverflow
  • Doing CSS layouts makes it easier to change the layout of your page later on. Tables also make your HTML harder to read and edit.

    According to this page..

    • make your redesigns more efficient and less expensive
    • help you maintain visual consistency throughout your sites
    • get you better search engine results when you minimize your markup and use header tags properly (this reduces the ratio of code to content)
    • make your sites more accessible to all viewers and user agents
    • make your pages load faster; file sizes will be smaller because table markup isn't needed on every page
    • and give you a competitive edge (that is, job security) as more of the world moves to using Web standards.
    fluffels : +1 for readability
    Ian Suttle : Lowers your hosting costs? why, because of b/w? Better search engine results? Why, because you can put relevant content up top then position elsewhere via CSS? more accessible? More info on how these things apply would be really useful.
    Simucal : @Ian Suttle, because you can extract common elements of you design and reduce the page size of each page of your website. When serving up these pages thousands of times, that can save a lot of bandwidth.
    Ian Suttle : The only thing you reduce without using tables is the difference between <div> tags and a <table> tag, for instance. Then again I see most <div> tags with more nested <div> tags. Not sure this is a point valuable enough to heavily influence a decision.
    Bolt_Head : I changed it up a little; images are surely the most influential aspect of page loading time. But that isn't to say we should neglect everything else.
  • Tables also affect the behaviour of search engines. Using CSS instead of tables is actually an SEO technique as well. The reason I heard for this is that search engines stop indexing after a certain level of tags, and layout using tables can become very deeply nested.

    Matthew Scharley : Design with tables doesn't need to become nested, especially if you have a decent visual designer (like Dreamweaver) which can handle rowspans and colspans nicely. Nested tables are mostly there to keep the code clean and separate. Oops, sounds like divs to me (just with a lot more code).
    fluffels : @monoxide: Doesn't need to become nested, but they do tend to, in my experience. Note that I said "*can* become very deeply nested."
  • CSS layouts make it easier to alter the layout through stylesheets (separation of data from display).

    Also, browsers often can't render a table until the </table> tag has been parsed. This can be a problem if you are using a table with a large amount of content. CSS layouts can often be rendered as they go.

  • If you have the kind of constraints you mentioned (deadline looming, large existing code base), then use tables; the universe won't implode. But in the long term, using CSS and well-formed markup will allow you to create a nicer, cleaner, more maintainable website.

    annakata : The pragmatic approach is often the correct one, but saving time now is kind of a false economy when it comes to de-tabling later, beware of that.
  • Implementing your layout with divs, spans, etc, offers a ton more flexibility over tables. Consider floating, auto wrapping of block content vs. horizontal scrolling, specifying where an element should exist on a page, etc. Content is just plain easier to manipulate via CSS when not using tables.

    When using tables you're pretty locked in to a strict structure.

    Now that doesn't mean it's absolutely the wrong thing to do. From your information I'd likely stick with the theme of the application for consistency's sake and implement using tables. Make the best choice for the situation vs. following "the rules" on what's popular right now.

    Hope that helps! Ian

  • Using CSS for layout, rather than HTML tables provides a level of separation between your page content and the layout of that content.

    With tables, your HTML markup has both your content and layout intermingled within the HTML code. This means if you ever want to have a different layout for the same content, you'll need to edit the intermingled code heavily.

    By using a CSS stylesheet over your HTML markup you separate the code that provides the content (HTML) from the code that provides the layout (CSS). This makes it far easier to completely change the layout aspect of a web page without having to edit the content code.

    The best example of this is the CSS Zen Garden website.

    All that said, it's still far easier to do some layout techniques in tables rather than CSS, so there's definitely a "balancing act" there. (For example, even Google uses tables in its page layouts!)

  • They are also bad for accessibility reasons, i.e. screen readers don't read them correctly; but this only applies if the data you are representing is not tabular data.

    Saying that, due to deadlines etc. (the real world) I've resorted to using them for form layout because it's just so much easier.

    Neil Aitken : I used to struggle with CSS forms as well; there are some excellent techniques out there for getting them to behave. I haven't regretted taking some time out to build a generic form stylesheet, which I can apply to most situations.
    annakata : +1 this is the big deal
    Ian Devlin : Agree with Neil, I've had the same issues in the past, but after taking some time to look into it correctly, I too have sorted forms with CSS.
    marktucks : I have also sorted *simple* forms with CSS, but if there's anything that's not the norm, or is just going to take some fiddling, then it's just easier and takes a lot less time, leaving more time to focus on the actual functionality and the fun stuff.
  • By using tables for layout you tie the content to the formatting, this is the main point that most of the disadvantages come from. It messes up the source, complicates site-wide formatting updates that could be easily done with CSS, etc. Using tables for layout also often means slicing images on weird places just to fit them in the cells, yuck.

  • Tables are pretty ugly if you want to support text to speech readers (if you've got any kind of accessibility requirements, you pretty much have to use tables for tables and nothing else).

    There is an SEO aspect to this as well. For SEO, it's better to have your content at the top of the page than at the bottom. (Sadly, studies suggest that putting a menu first is actually a good idea for accessibility, so the two imperatives clash.)

    If you are having trouble with cross-browser support, I can suggest that you develop using Firefox. Its support for CSS development is much better than other browsers, and it's more standards-compliant (and hence more predictable). Later on, you can patch up for down-level browsers.

    And, as Peter says, the universe won't implode. But using CSS for layout is cleaner in the long term. Redesigning something with the layout hard-coded can be a real pain. Check out the CSS Zen Garden for examples of what you can do. Of course, in the future, we'll have CSS table layouts, the best of both worlds.

  • Because the tag <table> implies it should contain tabular data.

    Like Peter said, the universe won't implode if you do create table-based layout, but semantically, this should be avoided.

    Semantics become increasingly important, since web pages are not always shown in a desktop browser anymore. Web standards are developing in a way that HTML describes the semantic structure of the document, not its presentation; and as said: using <table> tags for tabular data is semantically correct.

    Interesting related reads:

    Adnan : That thread had a lot of points that resonated with me. We don't use tables for all layout, just for the macro grid. All other "formatting" is done using CSS/divs within the tds. I played with floats/positions and I spent an hour just getting a basic 4 column form style page to look barely decent.
  • You can use tables, but divs have more advantages; it's all a matter of perspective. Like you say, you are pressed for time, so probably (on short notice) tables will be the choice to make. However, if you manage to make it div-wise, you will have a more maintainable page, which you can use as a model to refactor the other pages as well.

  • First, you can usually convert a table layout to divs without too much trouble once you know the necessary CSS.

    But more to the point, some of the reasons why tables are bad:

    • Not all your users are going to see the page using the browser's rendering engine. Some are going to use a screen reader, which may assume that when it encounters a table, it should actually read the contents out as tabular data. Essentially, you should think of it as a coincidence that tables have a certain layout "look" in a browser.
    • Tables can't be rendered until the table and all its children have been parsed. A div can be rendered independently of its children, so you avoid the page contents jumping up and down as the page is parsed, and get to show content sooner.

    There are good and valid reasons to prefer tables, but when you're used to tables, there's also a certain learning curve, so if you're pressed for time, this might not be the time to make that switch.

  • Jeffrey Zeldman wrote a great book on this topic: Designing With Web Standards. This book should be mandatory reading for every web designer.

    Here are some reasons why tables for design are bad

    • Tables generate more markup -> more bandwidth usage
    • Tables make it harder for search engines to index your pages -> your site becomes less "searchable"
    • Tables make it harder to change and tweak the visible appearance of your site -> more costly redesigns
    • ...But most importantly: using tables for design adds presentation logic to your markup

    That is bad because you want to separate your presentation from your content! HTML should only define WHAT your content is, not HOW it should look.

    If you obtain this level of separation your site will...

    • ...be more accessible by different kinds of browsers and other kinds of user agents.
    • ...make redesigns much easier
  • It's best not to listen to the fanatics of CSS. These purists live in some imaginary world and will feed you arguments about some abstract purity, semantics and the rest. You would not want to have this kind of people on your team. So far I've only seen yellow-mouth students run around with "pure CSS design" ideas. They'll show you examples of very primitive CSS-designed sites, ignoring you whenever you ask them how to accomplish some complex enterprise software design with it. Experienced developers already know this perfection is not possible.

    What should you do? Follow the KISS principle. If CSS can help you implement your particular design idea, fine, the problem is solved. If you need to resort to some relatively simple hack, it may be okay. But when it comes to some huge piece of code with dozens of rules to achieve a basic thing that you could easily and naturally achieve with tables, drop it.

    It serves no purpose to create an incredibly complex and sophisticated, tricky CSS design that no one else (including yourself after a couple of months) will be able to understand and support.

    As for the "easy redesign": unless you only want to change colors and fonts, you will inevitably have to restructure your markup whether it is done with tables or without them. In the latter case, a high portion of your markup will serve no purpose except to implement CSS trickery, which will be useless for your new redesign idea.

    Honestly, it's not your fault that the people responsible for further development of HTML/CSS don't do their job properly. You should not waste your and other people's time trying to find workarounds for something that should have been there ages ago.

    HTML/CSS is a good example of how poorly a committee can do its job and hold back the development of technology, as compared to a single company with resources and commitment.

    annakata : I don't even know what "yellow-mouth" means, but you're way off the res here. It is every developers job in every field of development to ensure maintainability.
    User : It means "inexperienced" and willfully ignorant of this.
    annakata : Interesting. Google just says it's some sort of a fish and a disease - where does this expression originate?
    User : Sorry, I should probably not have used this expression; it could well be culture-specific. I remember that some birds' young have yellowish insides to their beaks until they mature a little bit. :)
  • I'm going to post a contrarian view, just so you have some ammo. I realize this doesn't answer your question (why not use tables) but it does address the other responses you will see.

    So be practical.

    That said, most of my designs use pure CSS. Nearly everything can be done in CSS just as easily as with tables (or even easier.) Start with the assumption that DIVs can do it, and maybe fall back on 1 table for a tricky multi-column layout. Even then, someone has probably found a solution for your layout so search first.

    annakata : Your second link is based on misconceptions, the third is a lemming argument, and your point about display:table is very useful but is an argument in favour of CSS, not tables. The whole display:table thing solves *everything* if only we can speed it along.
    Steve Eisner : Isn't that exactly the point? I totally agree with you about display:table, but we're building HTML now, not next year. (I happen to disagree with your unsubstantiated terms "misconceptions" and "lemming" but understand why you might feel that way)
    annakata : It is the point, but it's a point for CSS, not for table elements. It's too big to cover here (and belongs on that blog anyway) but justifying the use of tables on the grounds that google does is outright giving up. That's no answer at all. And the "I'll write an API if I want to" is quite ignorant.
  • "The right thing to do" depends on your situation.

    Yes, divs offer more flexibility, but at the end of the day, do you really need it? As far as the number of lines of code goes, I have found that a simple table is often quite a bit smaller than an equivalent div design.

    It's faster to put together, and it works as reliably as possible in pretty much all browsers, which means you don't have to write a zillion hacks to make it work consistently AND it doesn't break down when a new display engine comes out.

    Using tables for display is technically a hack, but it does the job, is easy to understand and maintain and is to a large degree future proof, so for a simple layout, considering that the rest of the website already uses tables for layout, I would keep using tables.

    Besides, turning a table into a "div table" which uses the new "display: table" CSS values can easily be done with a few regexes (if you rack your brains, you can probably even do it in a single pass). If you know your regexes, you can replace tables with divs through an entire website in a matter of minutes, so the whole issue of flexibility is really not that big: when (if) you actually need to use divs and CSS positioning, just run a few regexes and you are set. And if you need more control than what you can handle with regexes, you can spend a couple of hours writing a quick & dirty parser that will convert your tables to divs exactly as you want them.

    Bottom line is that despite the stigma, for simple layouts, tables still do the job, they do it fast and are reliable, and if you don't have to contend with CSS purists, you can save yourself a lot of work.
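To make the regex idea above concrete, here is a deliberately simplistic sketch in JavaScript. The function name and the exact replacements are my own illustration, not the answerer's; it only handles bare, attribute-free tags, and a real conversion would want a proper HTML parser.

```javascript
// Rewrite bare <table>/<tr>/<td> markup into divs styled with the
// CSS "display: table-*" values. A sketch: attribute-free tags only.
function tablesToDivs(html) {
    return html
        .replace(/<table>/g, '<div style="display: table;">')
        .replace(/<\/table>/g, "</div>")
        .replace(/<tr>/g, '<div style="display: table-row;">')
        .replace(/<\/tr>/g, "</div>")
        .replace(/<td>/g, '<div style="display: table-cell;">')
        .replace(/<\/td>/g, "</div>");
}

var converted = tablesToDivs("<table><tr><td>x</td></tr></table>");
```

Tags with attributes (class, colspan, etc.) would need extra patterns or a parser, which is where the "quick & dirty parser" the answer mentions comes in.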

Convention for Javascript Domain Model Objects

    Hi all,

    If I have to create a domain model object in C# I might do something like this:

    public class Person
    {
        public string Name { get; set; }
        public string Gender { get; set; }
        public int Age { get; set; }
    }
    

    In Javascript, what is the convention for defining such objects? I'm thinking something like this:

    NAMESPACE.Person = function() {
        this.name = "";
        this.gender = "";
        this.age = 0;
    }
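To make that concrete, a minimal sketch of how this pattern would be used (NAMESPACE is assumed to be nothing more than a plain object literal):

```javascript
// The namespace is just an object literal; the constructor hangs off it.
var NAMESPACE = {};

NAMESPACE.Person = function () {
    this.name = "";
    this.gender = "";
    this.age = 0;
};

// Create an instance and set its fields directly.
var p = new NAMESPACE.Person();
p.name = "Ada";
p.age = 36;
```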
    
    From stackoverflow
    • I don't think there is much of a "convention" but you can declare "private" variables and Crockford shows you how to here

      
      function Constructor(...) {
          var that = this;
          var membername = value;
          function membername(...) {...}
      }
      

      Note: The function statement

      
      function membername(...) {...}
      

      is shorthand for

      
      var membername = function membername(...) {...};
      

      To round out the answer, you would do something like this; you'll notice I do it a little differently

      
      // class
      function Person()
      {
          // private variable
          var name = "Default Value";
      
          // getter
          this.getName = function()
          {
           return name;
          }
      
          // setter
          this.setName = function(s)
          {
           name = s;
          }
      }
      
      
      // instantiate an object
      var p = new Person();
      
      // alerts: Default Value
      alert(p.getName());
      
      p.setName("def");
      
      // alerts: def
      alert(p.getName());
      
      // alerts: undefined
      alert(p.name);
      
  • Yeah, spot on basically. The only thing to add is that you should put methods on the prototype. How you set up the prototype can depend on the design; I generally prefer to do it internally to the class, but flagged so it only runs when required, and only once.

      NAMESPACE.Person = function() {
          this.name = "";
          this.gender = "";
          this.age = 0;
      
          if(!NAMESPACE.Person._prototyped)
          {
              NAMESPACE.Person.prototype.MethodA = function () {};
              NAMESPACE.Person.prototype.MethodB = function () {};
              NAMESPACE.Person.prototype.MethodC = function () {};
      
              NAMESPACE.Person._prototyped = true; 
          }
      }
      

      Explaining why: The reason for this is performance and inheritance. Prototyped properties are directly associated with the class (function) object rather than the instance, so they are traversable from the function reference alone. And because they are on the class, only a single object needs to exist, not one per instance.
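A tiny self-contained illustration of that point (all names here are invented for the example): a method placed on the prototype is one shared function object, while a method assigned inside the constructor is recreated for every instance.

```javascript
function Person(name) {
    this.name = name;
    // recreated for every instance
    this.perInstance = function () { return this.name; };
}

// created once and shared by all instances via the prototype chain
Person.prototype.getName = function () { return this.name; };

var a = new Person("first");
var b = new Person("second");
```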

      Allen : would it be NAMESPACE.Person._prototyped, or NAMESPACE.Person.prototype._prototyped

Getting an object back from my GridView rows

    Basically, I want my object back...

    I have an Email object.

    // DataBinder.Eval binds to public properties, not fields,
    // so these are declared as auto-properties.
    public class Email{
        public string EmailAddress { get; set; }
        public bool PrimaryEmail { get; set; }
        public int ContactPoint { get; set; }
        public int DatabasePrimaryKey { get; set; }

        public Email(){}
    }
    

    In my usercontrol, I have a list of Email objects.

    public List<Email> EmailCollection;
    

    And I'm binding this to a GridView inside my usercontrol.

    if(this.EmailCollection.Count > 0){
        this.GridView1.DataSource = EmailCollection;
        this.GridView1.DataBind();
    }
    

    It would be really awesome if I could get an Email object back out of the GridView later.

    How do I do this?

    I'm also binding only some of the Email object's properties to the GridView, and they're put into Item Templates.

    <Columns>

    <asp:TemplateField HeaderText="Email Address">
        <ItemTemplate>
            <asp:TextBox ID="EmailAddress" runat="server" Text='<%# Eval("EmailAddress") %>' Width="250px" />
        </ItemTemplate>
    </asp:TemplateField>

    <asp:TemplateField HeaderText="Primary">
        <ItemTemplate>
            <asp:CheckBox runat="server" Checked='<%# Eval("PrimaryEmail") %>' />
        </ItemTemplate>
    </asp:TemplateField>

    <asp:TemplateField HeaderText="Contact Point">
        <ItemTemplate>
            <CRM:QualDropDown runat="server" Type="ContactPoint" InitialValue='<%# Eval("ContactPoint") %>' />
        </ItemTemplate>
    </asp:TemplateField>

    </Columns>
    

    Can GridView even do this? Do I need to roll my own thing? It'd be really cool if it would do it for me.


    To elaborate more.

    I am saving the List collection into the viewstate.

    What I'm eventually trying to get to is this: there will be a Save button somewhere in the control; when its event fires, I'd like to create an Email object from a data row in the GridView to compare to my original List collection. Then if there's a change, I'd update that row in the database. I was thinking that if I could put a List collection into a GridView, then perhaps I could get it right back out.

    Perhaps I could create a new constructor for my Email object which takes a DataRow? But then there's a lot of complexity that goes into that...

    From stackoverflow
    • ASP.NET Databinding is a one-way operation in terms of object manipulation. However, the DataSource property will contain a reference to your EmailCollection throughout the response:

      List<Email> col = (List<Email>)this.GridView1.DataSource;
      

      But I have a feeling that what you really want is a control that manipulates your EmailCollection based on user input and retrieve it in the next request. Not even webforms can fake that kind of statefulness out of the box.

    • If you want to hold onto an object like this, it's easiest to use the viewstate. Although you will be duplicating the data, for a small object it should be OK.

      // the Email class must be marked [Serializable] to be stored in ViewState
      ViewState.Add("EmailObj", email);

      Email email = (Email)ViewState["EmailObj"];
      
    • Just a thought, basically a roll your own but not that tricky to do:

      Store the list that you use as a datasource in the viewstate or session, and have a hidden field in the gridview be the index or a key to the object that matches the row.

      In other words, each row in the gridview "knows" which email object in the list that it is based on.

    • Well I ended up looping through my List EmailCollection, which was saved into the ViewState.

      So in the page, when the Save button is clicked and the event is caught, I loop through my List collection and grab the row from the GridView by index.

      On the GridViewRow I have to use GridView1.Rows[i].Cells[j].FindControl("myControl1") and then get the appropriate value from it, be it a check box, text box, or drop-down list.

      I do see that a GridViewRow object has a DataItem property, which contains my Email object, but it's only available during the RowDataBound phase.

      Unfortunately, if/when I need to expand this Email collection later, by adding or removing columns, it'll take a few steps.

      protected void SaveButton_OnClick(object sender, EventArgs e){
          for (int i = 0; i < this.EmailCollection.Count; i++)
          {
              Email email = this.EmailCollection[i];
              GridViewRow row = this.GridView1.Rows[i];
      
              // the ID passed to FindControl must match the control's ID in the markup
              string gv_emailAddress = ((TextBox)row.Cells[0].FindControl("EmailAddress")).Text;
              if (email.EmailAddress != gv_emailAddress)
              {
                  email.EmailAddress = gv_emailAddress;
                  email.Updated = true;
              }
              ...
          }
      }
      

      I'd still be open to more efficient solutions.