Everything posted by Nerseus
-
If you don't want to use the DataAdapter you don't have to - but you can still use the DataSet for binding and detecting changes. Assuming you get back a DataTable from your Select method, you can still bind a grid to the DataTable/DataSet. I'm not sure which grid you're using, but most grids will bind directly to the DataSet and make the changes there.

The DataSet has a HasChanges() method. It also has a GetChanges() method that will give you a trimmed-down version of your DataSet containing just the changed rows (updated, inserted and deleted). Each DataRow tracks its original and modified values automatically. With all that information in the DataSet, you should be able to write some generic code to loop through the changed rows and build whatever statements your own Execute method needs.

As always, if you need specific help, just let us know what your execute object expects. If it simply wants some dynamic SQL, then I'd think about investing a half day or so to get used to the DataAdapter - abandoning a bad design in favor of a robust one is always worth it. I've used MySQL with the .NET Connector and it's very easy. Plus, the .NET Connector has full source available in C# if you want to look at it. -ner
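To make that loop idea concrete, here's a rough sketch of walking GetChanges() - the table and column names (Customer, CustomerId, Name) are made up for illustration:

using System;
using System.Data;

class ChangeScanner
{
    // Walks the changed rows and shows where each piece of a statement would come from.
    public static void PrintChanges(DataSet ds)
    {
        if (!ds.HasChanges())
            return;

        DataSet changes = ds.GetChanges();
        DataTable changedCustomers = changes.Tables["Customer"];
        if (changedCustomers == null)
            return;

        foreach (DataRow row in changedCustomers.Rows)
        {
            switch (row.RowState)
            {
                case DataRowState.Added:
                    Console.WriteLine("INSERT ... Name = " + row["Name"]);
                    break;
                case DataRowState.Modified:
                    // The original value is still available for the WHERE clause.
                    Console.WriteLine("UPDATE ... SET Name = " + row["Name"]
                        + " WHERE CustomerId = " + row["CustomerId", DataRowVersion.Original]);
                    break;
                case DataRowState.Deleted:
                    // Deleted rows only expose their Original values.
                    Console.WriteLine("DELETE ... WHERE CustomerId = "
                        + row["CustomerId", DataRowVersion.Original]);
                    break;
            }
        }
    }
}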
-
@lidds: If your variable is an object that is declared as "new", you can simply create another object, as in:

Dim o As ThirdPartyObject
Dim i As Integer
For i = 0 To 10
    o = New ThirdPartyObject()
    ' Use new instance of o here
Next

or in C#:

ThirdPartyObject o;
for (int i = 0; i <= 10; i++)
{
    o = new ThirdPartyObject();
    // Use new instance of o here
}

-ner
-
In SQL Server with a clustered index, the engine stores rows on the disk in the order defined by the clustered index. The clustered index does not have to match the primary key, but often does. If you had a clustered index then inserting "row 2" where 4 million rows currently exist (1, followed by 3 through 4 million) would take a LONG time as the engine would have to move rows 3 through 4 million "up" on the disk to make room for row 2. To go back to your original question - you need to identify and delete duplicate rows but you can't modify the table? The only solution that comes to mind is to use a cursor to loop over the rows and delete inside the loop. That seems really nasty compared to adding a column for a one time cleanup task - you could always remove the column if you're worried it will cause issues. -ner
-
Selecting multiple columns into a single variable.
Nerseus replied to mike55's topic in Database / XML / Reporting
@mike55: If you don't need the columns separated out for any reason, I'd do it just the way you are, using + or whatever your DB uses for concatenation. You may want/need to wrap each column in case the values are null. In most databases, if you try to concatenate two strings and one is null, the result is null. In SQL Server it would look like this:

SELECT IsNull(column1, '') + ', ' + IsNull(column2, '') + ', ' + IsNull(column3, '') AS Exp1
FROM Table1

You can also use COALESCE instead of IsNull if you prefer. -ner
-
If this were me, I'd add an "identity" column to your table first. You can make this field up yourself if you want - just make sure it's a unique value for every row. Then you're left with deleting the duplicates. That shouldn't be that hard. What I generally do is something like the following. This assumes you have two fields you want to match on, FirstName and LastName:

SELECT MIN(MyNewID) AS MyNewID, FirstName, LastName
INTO #temp
FROM Table1
GROUP BY FirstName, LastName

DELETE FROM Table1
WHERE MyNewID NOT IN (SELECT MyNewID FROM #temp)

This puts all matching records in a temp table (works in SQL Server, not sure about DB2). The GROUP BY will get you a single row - you said you don't care which one. The MIN(MyNewID) is to, again, get a single ID per matching set of values. I chose MIN but you could use any aggregate function. You said it didn't matter, so I prefer MIN - it may find it faster if the table has an index. If you CAN'T add a column to the existing table, I'd suggest first copying the table to a new table where you CAN add the unique column. Do your cleanup there, then TRUNCATE the original table and copy everything back. -ner
-
What you describe would be a WinForms application that uses some of the Web or FTP classes to download information to the local machine. I've used the framework classes to download files from web servers and FTP servers, and I think there are a number of links on this forum. If you don't want users seeing the URL to your files, I wouldn't let them have direct access at all. I'd provide a webmethod (asmx) or webpage (aspx) that can stream the content to the user. Your program can pass up a filename only (or folder/filename) to a webmethod, or embed the filename in the URL (http://web.com/page.aspx?filename=blah.txt), do whatever validation you need, then stream the file down. With some extra work you could track their session to prevent them from trying to download more than one file at a time. -ner
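Here's a rough sketch of the aspx idea - the page name, folder and validation are made up, and you'd want more checks than this:

using System;
using System.IO;
using System.Web;

// Code-behind for a hypothetical Download.aspx that streams a requested file.
public partial class Download : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Strip any path so the user can't walk up the directory tree.
        string fileName = Path.GetFileName(Request.QueryString["filename"]);
        string fullPath = Server.MapPath("~/ProtectedFiles/" + fileName);

        if (!File.Exists(fullPath))
        {
            Response.StatusCode = 404;
            return;
        }

        // Stream the file down without ever exposing its real URL.
        Response.ContentType = "application/octet-stream";
        Response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);
        Response.WriteFile(fullPath);
        Response.End();
    }
}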
-
Easiest Way to Make DB App Read Only?
Nerseus replied to mjb3030's topic in Database / XML / Reporting
Can you tell us where the error occurs? My guess is you're getting the error when opening the connection, before any DataAdapters are created or used. Access is pretty particular about opening the database file. I'm not an Access guru, but I remember seeing a special param on a connection string that signifies you don't want exclusive access. Maybe try that? It may simply be that you can't have the DB file read-only. -ner
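Something along these lines might be worth a try - the Mode=Read attribute is the kind of param I mean, but treat it as a guess to verify against your provider's docs, and the file path is made up:

using System.Data.OleDb;

class ReadOnlyAccessExample
{
    static void Main()
    {
        // Mode=Read is an assumption - it asks the Jet provider for read-only access.
        OleDbConnection conn = new OleDbConnection(
            @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\MyDb.mdb;Mode=Read");
        conn.Open();
        // ... fill your DataAdapter here ...
        conn.Close();
    }
}
-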
Our current DBA and a guy from the Microsoft SQL Server team recommend NEVER allowing NULLs on columns that will be indexed/searched - indexes can't account for NULLs very well. Based on that advice, we've gone to using both end dates and active fields and it's quite nice. The experts offer an alternative, which is to always set the expiration date: for rows that haven't expired, set it to a "max" date so that queries always return rows. Personally, I like the true/false field as it makes the code easier to read. Now, this knowledge is for SQL Server, where bit fields are much faster than dates, especially dates with nulls. In Access, I don't know about the performance of a true/false field - internally I'd hope it's a bool value of some kind. All the above having been said, we're talking about Access. I really wouldn't worry much about performance there. If you're considering performance at that level of detail, then maybe Access isn't the right DB. -ner
-
Problem with alter table stored procedure...
Nerseus replied to lidds's topic in Database / XML / Reporting
You'll have to use some dynamic SQL for this to work and manually piece in the column name. No guarantees that will work, but that's what I'd try. It would be something like this:

CREATE PROCEDURE [dbo].[spAddColumn]
    @columnName varchar(50)
AS
    DECLARE @GoodColumnName varchar(255)
    SET @GoodColumnName = '[' + REPLACE(@columnName, '''', '''''') + ']'
    EXEC ('ALTER TABLE myTbl ADD ' + @GoodColumnName + ' varchar(50)')

-ner
-
If you want a guaranteed date format, you have to provide the format string. For example:

date.ToString("dd/MM/yyyy hh:mm:ss")

If you want am/pm at the end, try:

date.ToString("dd/MM/yyyy hh:mm:ss tt")

There are other format strings available - check the MSDN documentation for Custom DateTime Format Strings. Using ToShortDateString will get you a date formatted according to the OS's rules. You may be able to override that with a custom DateTimeFormatInfo object, but I tend to prefer the custom format string. -ner
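One thing to watch with those custom strings: lowercase hh is the 12-hour clock and uppercase HH is the 24-hour clock, so hh without tt can be ambiguous. A quick illustration (the sample date is arbitrary, and the am/pm text depends on the culture):

using System;

class FormatDemo
{
    static void Main()
    {
        DateTime date = new DateTime(2005, 9, 14, 15, 30, 0);

        Console.WriteLine(date.ToString("dd/MM/yyyy hh:mm:ss"));    // 14/09/2005 03:30:00 (12-hour, no am/pm)
        Console.WriteLine(date.ToString("dd/MM/yyyy hh:mm:ss tt")); // 14/09/2005 03:30:00 PM
        Console.WriteLine(date.ToString("dd/MM/yyyy HH:mm:ss"));    // 14/09/2005 15:30:00 (24-hour)
    }
}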
-
I worked at a company that called them programming opportunities. I always liked that. -ner
-
MySQL - Inserting multiple records at the same time.
Nerseus replied to EFileTahi-A's topic in Database / XML / Reporting
I read on their main page that they have supported ACID since version 3.x (can't recall the exact version). That means that one user shouldn't have an issue with another user, and you can guarantee that a set of inserts or updates commits as one unit, if you want. -ner
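A rough sketch of grouping inserts into one unit with the .NET Connector - the class names come from the MySql.Data connector, while the connection string, table and the need for a transactional engine (InnoDB) are assumptions to verify:

using MySql.Data.MySqlClient;

class TransactionExample
{
    static void Main()
    {
        using (MySqlConnection conn = new MySqlConnection(
            "Server=localhost;Database=test;Uid=user;Pwd=pass;"))
        {
            conn.Open();
            MySqlTransaction tran = conn.BeginTransaction();
            try
            {
                MySqlCommand cmd = conn.CreateCommand();
                cmd.Transaction = tran;

                cmd.CommandText = "INSERT INTO Orders (CustomerId) VALUES (1)";
                cmd.ExecuteNonQuery();

                cmd.CommandText = "INSERT INTO Orders (CustomerId) VALUES (2)";
                cmd.ExecuteNonQuery();

                // Both inserts commit as one unit - or neither does.
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
                throw;
            }
        }
    }
}
-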
Easiest and cheapest solution is to change your expectation to match what your program does. Maybe you like unhandled exceptions, a bit of chaos in an otherwise unchanging life? This way, nothing is wrong. -ner
-
For #1 and #2, I have no idea - sounds like a good idea if it's not there already. Why not give a little more control over the default naming conventions used on newly added controls? For #3, you can definitely modify the "template" files that VS uses when you add a form (for example). Just have it create one file instead of two and don't use the word partial. Personally, I think partials for designer-generated code are a GREAT idea. They have the potential to be abused by people who just want their files "clean" by separating logic into separate files. It's the same with people who over-use regions (I used to be one of them). Regions and partial classes have their uses, but can easily be abused by those of us who obsess over "cleanliness". If you're not one of them, I'd suggest using partials for the designer-generated code and see how you like it. I can't imagine that having that one method, InitializeComponent, in a separate file is too confusing. -ner
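For what it's worth, this is all a partial class amounts to - one class split across two files (the file names and members here are made up):

// Form1.Designer.cs - the designer-owned half
partial class Form1
{
    private System.Windows.Forms.Button okButton;

    private void InitializeComponent()
    {
        this.okButton = new System.Windows.Forms.Button();
        this.okButton.Text = "OK";
        this.Controls.Add(this.okButton);
    }
}

// Form1.cs - the half you actually edit
partial class Form1 : System.Windows.Forms.Form
{
    public Form1()
    {
        InitializeComponent();
    }
}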
-
Yes, you can use MySQL with ASP.NET. I can't compare PHP to .NET, but remember that .NET is not interpreted, it is JIT'ed. That means the code is compiled as needed. A DLL stores the IL code. When you run the program, the .NET engine loads the code as needed and compiles it on the fly. If you have a large function in your DLL but it's never called, it won't get compiled. JIT stands for "just in time". So you DO take a small hit to load the IL and have the engine compile it on the fly (just in time), but execution after that is as fast as any compiled code. This is different from VB6 and prior, which always interpreted every line of code - if you had an "if" inside a loop, it would be re-interpreted every time through. That is not the case with .NET. I can't speak for PHP; I know nothing of it. -ner
-
The most important thing to keep in mind is that programming is easier when the code expresses what you're thinking. Ideally, you want code that reads like pseudo-code. When it's that easy to understand, you're less likely to have errors in your code. Most of the latest advancements in .NET, and programming in general, are heading in that direction. For example, refactoring tools aren't just hype - they help make code more readable. More readable means fewer errors (hopefully). Performance is the trade-off for all of this, so it's good to keep in mind. You wouldn't do something that's obviously bad for performance, but you also don't need the old-school style of thinking where you must design around performance, CPU cycles or even memory consumption. This is all for "normal" business programming, of course. For real-time programs, games, handheld devices and other specialty areas you may have other priorities. -nerseus
-
I assume you're asking about the physical DVD? I would think it will be released very soon. I use Alcohol 120% to mount the ISO as a virtual drive and it works great. You can also use Daemon Tools, which is free. They both work the same way, creating a virtual drive (you pick the drive letter; it shows up as a standard Windows drive) from an ISO (and other formats). You can then run setup.exe right from there. Alcohol also lets you burn the image to a CD/DVD. I haven't played with RC1 yet - I wanted to install it last night but got caught up doing other things. I loved the renaming feature of Beta 2 (rename a control, get prompted to change ALL of your code related to the control). I never had any corruption, but I always made sure to compile my project successfully before a name change. I assumed VS used the compiled info to know where your control was referenced. -ner
-
Just talked with a coworker today who had watched the C# 3.0 show on LINQ. Holy wow - what an idea! They're adding the ability to query just about anything, including your compiled objects. With SQL you could query a table:

SELECT FirstName, LastName
FROM Customer
WHERE LastName = 'SMITH'

Now they're making it so that you can query your objects. Imagine having a collection of Customer objects with FirstName and LastName string properties - write the same query to get a list of objects. Now take this SQL:

SELECT FirstName, LastName
INTO #temp
FROM Customer
WHERE LastName = 'SMITH'

This creates a table on the fly, called #temp, with just two columns. They have that in C# 3.0: it will create a new object on the fly, with just the properties you wanted. And it will give you IntelliSense at design time - now that's just crazy! I haven't looked into it much yet, but imagine the usefulness of a DataSet - storing data and relationships with possible cascading events, expression columns, data "views", querying with a filter expression, etc. Now take that idea and apply it to objects. Gotta go - I must install VS 2005 RC1 tonight and there's only so much time in the day... -ner
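Based on the little I've seen so far, the object query looks roughly like this in C# 3.0 (the Customer class and data are made up):

using System;
using System.Collections.Generic;
using System.Linq;

class Customer
{
    public string FirstName;
    public string LastName;
}

class LinqDemo
{
    static void Main()
    {
        List<Customer> customers = new List<Customer>
        {
            new Customer { FirstName = "John", LastName = "SMITH" },
            new Customer { FirstName = "Jane", LastName = "JONES" }
        };

        // Same shape as the SQL above, but run against in-memory objects.
        // "select new { ... }" builds the on-the-fly type, like SELECT ... INTO #temp.
        var smiths = from c in customers
                     where c.LastName == "SMITH"
                     select new { c.FirstName, c.LastName };

        foreach (var s in smiths)
            Console.WriteLine(s.FirstName + " " + s.LastName);
    }
}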
-
There are a couple of known bugs that will cause memory leaks in aspnet_wp.exe - I'd check http://support.microsoft.com for info there. I've only used memory tools for local projects; I haven't had the need to do so on a webserver. If you search Google for something like "memory leak ASP.NET" or just "memory leak .NET" you get a lot of good hits. I did have one ASP.NET app that was, by itself, killing the webserver by eating up too much memory. It was a webpage that was trying to load a DataSet with about 6 million rows (bad WHERE clause, BAD!). Combine all that memory with the fact that we had the threshold for memory set too high and the machine would just hang. You can specify a value on the webserver that will reset IIS (or maybe it just kills aspnet_wp.exe, I can't remember) when memory hits a certain upper limit. On our DEV server this was set to something like 99999999999 or the equivalent (our IS guy laughed when he saw it). -ner
-
[2005] ComboBox_SelectedIndexChanged Errors... how to prevent *properly*?
Nerseus replied to Denaes's topic in Windows Forms
ColumnChanged will fire even if a value stays the same, but only when it's "committed" to the dataset. So if you're in a textbox, it won't fire on every keypress - not until you tab out (by default) and the change is sent from the textbox to the bound dataset. So if you have the name "bob" in a textbox and backspace over the last "b" then retype it, it should fire ColumnChanged again (I think - you'll have to test). If you have a value in a DataSet and set it to itself, that will also raise a ColumnChanged event. As for formatting data before it goes into the DataSet and back into the control, you'll want to use the Parse and Format events. In this post I have a one line snippet. You can find a lot more info on these events in the help, which also includes sample code. Since it fires an event, you could even change how you want to format values to/from your dataset. -ner
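If you haven't used Parse and Format before, the wiring looks roughly like this (the table, column and control names are made up):

using System;
using System.Data;
using System.Globalization;
using System.Windows.Forms;

public class OrderForm : Form
{
    private TextBox feeTextBox = new TextBox();
    private DataSet orderDataSet;   // assume this gets filled elsewhere

    // Call once after the DataSet is loaded.
    private void BindFee()
    {
        Binding binding = new Binding("Text", orderDataSet, "Order.Fee");
        binding.Format += new ConvertEventHandler(FeeFormat); // DataSet -> control
        binding.Parse += new ConvertEventHandler(FeeParse);   // control -> DataSet
        feeTextBox.DataBindings.Add(binding);
        Controls.Add(feeTextBox);
    }

    private void FeeFormat(object sender, ConvertEventArgs e)
    {
        // Show the decimal from the DataSet as currency in the textbox.
        if (e.Value != null && e.Value != DBNull.Value)
            e.Value = ((decimal)e.Value).ToString("c");
    }

    private void FeeParse(object sender, ConvertEventArgs e)
    {
        // Turn the user's text back into a decimal before it hits the DataSet.
        e.Value = decimal.Parse(e.Value.ToString(), NumberStyles.Currency);
    }
}
-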
If I were given that in an interview I'd have asked "Seriously?" There's no point in giving that in an interview. If they said "Yes, seriously" then I'd have walked out. -ner
-
[2005] ComboBox_SelectedIndexChanged Errors... how to prevent *properly*?
Nerseus replied to Denaes's topic in Windows Forms
If I jump ahead, I believe you're saying that you manually move through the Binding collection yourself (next/prev) and you want the SelectedIndexChanged to fire then? If so, I'd recommend something a bit different - but first a question: what are you doing in the SelectedIndexChanged event? Normally you'd apply some filtering logic on a second combo when the first one changes its value. If that's the case, then I'd recommend NOT handling the filtering in the control's event - it ties the changing of another value (or the filtering of another combo) to UI-related code. Rather, use a ColumnChanged event. Here's pseudo-code for what I'm guessing you have right now:

1. User presses Next
2. Bump the Position property of the BindingContext (or similar)
3. The binding automatically changes a value in a combobox, which triggers SelectedIndexChanged
4. In SelectedIndexChanged you filter a second combo

Here's how I'd change this:

1. User presses Next
2. Bump the Position property of the BindingContext (or similar)
3. The binding automatically changes a value in a DataSet, which triggers ColumnChanged
4. In ColumnChanged you filter a second combo

This works off the data, not the controls. As a general rule, I try to put as little as possible in my events. The only code I usually need is "immediate update" code. For example, in a SelectedIndexChanged event I may take the value out of the control and update the dataset manually. I do this for user feedback, so that when the user pulls down the combo and selects a value, it updates the dataset immediately and triggers any changes they want to see (calculated fees, other combos that filter, controls that disable, etc.). -ner
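In code, the data-driven version looks roughly like this (the table, column and control names are made up):

using System.Data;
using System.Windows.Forms;

public class OrderEntryForm : Form
{
    private DataSet orderDataSet;                       // assume this is loaded elsewhere
    private ComboBox productCombo = new ComboBox();     // the second combo, bound to a DataView

    private void HookColumnChanged()
    {
        orderDataSet.Tables["Order"].ColumnChanged +=
            new DataColumnChangeEventHandler(Order_ColumnChanged);
    }

    private void Order_ColumnChanged(object sender, DataColumnChangeEventArgs e)
    {
        // React to the data changing, whether it came from a bound control
        // or from code setting the value directly.
        if (e.Column.ColumnName == "CategoryId")
        {
            DataView productView = (DataView)productCombo.DataSource;
            productView.RowFilter = "CategoryId = " + e.ProposedValue;
        }
    }
}
-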
[2005] ComboBox_SelectedIndexChanged Errors... how to prevent *properly*?
Nerseus replied to Denaes's topic in Windows Forms
I know of three workarounds. The first seems to be the most common, but I don't like it as much:

1. Add a class-level variable (bool). Set it to true before binding, false when done. Check this variable in the event and exit if true (skips the SelectedIndexChanged event while loading/binding).

2. Put this at the top of the event:
if (!((Control)sender).ContainsFocus) return;

3. Put this at the top of the event:
if (this.ActiveControl != sender) return;

The last two work during loading. They basically say that if the user wasn't on that control, making the index change, then ignore it. -ner
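Here's a rough sketch of the first workaround, since the other two are one-liners (names made up):

using System;
using System.Windows.Forms;

public class CustomerForm : Form
{
    private ComboBox stateCombo = new ComboBox();
    private bool loading;   // workaround #1: class-level flag

    public CustomerForm()
    {
        stateCombo.SelectedIndexChanged += new EventHandler(stateCombo_SelectedIndexChanged);
        Controls.Add(stateCombo);
        BindControls();
    }

    private void BindControls()
    {
        loading = true;
        try
        {
            // Set DataSource / DataBindings here - SelectedIndexChanged
            // will fire, but the handler below bails out immediately.
        }
        finally
        {
            loading = false;
        }
    }

    private void stateCombo_SelectedIndexChanged(object sender, EventArgs e)
    {
        if (loading)
            return;

        // Real handling goes here.
    }
}
-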
Using a foreach is preferred over a custom for loop. The main reason is readability, with a second benefit being that it's more "error proof". The times you can't use a foreach are whenever your loop needs to modify the collection/array. Performance used to be a factor: the for loop was traditionally faster. Now, .NET makes them equally fast. Plus, in most applications the speed difference of a foreach loop - even if it WERE slower - would be so small that it wouldn't matter. You can read a better argument in Effective C#: 50 Specific Ways to Improve Your C#. The same argument holds for VB.NET. I have only needed a for loop in two circumstances, by far the more common being deleting rows from a DataTable. The idea is this:

DataRow[] matchingRows = ds.Tables[0].Select("column = 'value'");
for (int i = matchingRows.Length - 1; i >= 0; i--)
{
    matchingRows[i].Delete();
}

In the above circumstance, you can't use foreach because you're deleting a row inside the loop. The foreach requires that the structure can't be added to or deleted from, but can be changed (sorta). -ner
-
The beta 2 of Visual Studio was by far the most stable I'd seen - display problems plagued earlier versions. In fact, one of the earlier versions wouldn't even let me resize controls on a form without causing an Exception within Visual Studio. I had to tweak things manually. I haven't played much with SQL Server - just downloaded/installed the September CTP at home. But your assumptions are likely right - with a release due out in November, there can't be that many changes. If this is a big enough project, you can contact MS to get on their "early adopter" program. They have special perks for companies that use their technology early. There are restrictions, the biggest (last I heard) was that you had to "promise" to implement the solution within a year of the official release. So if your product is going to production before November 2006, you may be a candidate. -ner