Everything posted by Nerseus
-
Maybe I'd have to see your code, but you can certainly create a form from an abstract class that's not a nested class. When you try to view a form in the designer and your form inherits from an abstract class, Visual Studio will give you an error. That error occurs because Visual Studio wants to instantiate the base class and can't, because it's abstract. But that error is only in Visual Studio - at runtime you can still create your form (the one that inherits from the abstract class). -ner
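For example, here's a minimal sketch (the class names are made up, not from the original poster's code) showing a form inheriting from an abstract, non-nested base class - it compiles and runs fine even though the designer will choke on it:

using System.Windows.Forms;

// The designer can't instantiate this abstract base, but the runtime never needs to.
public abstract class BaseForm : Form
{
    // Force every derived form to say how it loads its data.
    public abstract void LoadData();
}

public class CustomerForm : BaseForm
{
    public override void LoadData()
    {
        // load customer-specific data here
    }
}

// At runtime this works without error:
//   CustomerForm f = new CustomerForm();
//   f.LoadData();
//   f.Show();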
-
From this old thread I think you can create procs in Access, at least by using code. I have no idea if there's a way to do it through the Access GUI. [edit]Here's another link I found by googling for: "CREATE PROC" Access http://www.devcity.net/PrintArticle.aspx?ArticleID=18[/edit] -ner
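As a rough sketch of the "by code" route (untested against Access - the connection string, file path, and proc body are all made up), creating a proc through the Jet OLE DB provider might look like this:

using System.Data.OleDb;

class CreateProcExample
{
    static void Main()
    {
        string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\mydb.mdb";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            conn.Open();
            // Jet supports a limited CREATE PROC syntax when run through OLE DB
            string sql = "CREATE PROC GetCustomer (custId LONG) AS " +
                         "SELECT * FROM Customers WHERE CustomerID = custId";
            OleDbCommand cmd = new OleDbCommand(sql, conn);
            cmd.ExecuteNonQuery();
        }
    }
}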
-
That is definitely true. You would normally only put code in a Finally block that you're sure is NOT going to blow up, and that you want to run even if the code in the Try throws an exception (of course). I think wrapping individual lines with Try/Catch is the closest to a "Resume Next" you'll get. I can't for the life of me remember a single time I used Resume Next in VB6 or earlier, and I wrote a LOT of code back then :) I used On Error Goto ErrHandler in almost every function with some semi-standard logging (building a fake stack trace) and "bubbling up" of errors. Typically, if I had a function where I wanted to run a line of code and wanted to continue even if there were errors, I'd wrap that code in a function. For example, if I had a function that converted a string to an int and then used it:

Public Sub Test()
    Dim s As String = "123abc"
    Dim i As Integer = 0 ' Default to 0 in case Integer.Parse fails - I like to be explicit

    On Error Goto ErrHandler
    i = Integer.Parse(s)
    ' Do something with i
    Return

ErrHandler:
    Resume Next
End Sub

I would rather write it like this:

Public Sub Test()
    Dim s As String = "123abc"
    Dim i As Integer

    i = GetInt(s)
    ' Do something with i
End Sub

Public Function GetInt(ByVal s As String) As Integer
    Try
        Return Integer.Parse(s)
    Catch
        Return 0
    End Try
End Function

That way the code in Test() is more readable and doesn't worry about errors. -ner
-
The main reason I see to move from SQL Server to Access is portability. For example, to put your whole webserver on a laptop to take around and show off (for product demos). I would strongly suggest using MSDE in place of SQL Server for something like that. It's a bit harder to set up, but it mimics SQL Server in every regard (it isn't LIKE SQL Server, it IS SQL Server) except the number of connections. It would be a painless switch, whereas going to Access is likely NOT to go smoothly. -ner
-
Since it's off-topic, I'll keep it short: there's an excellent reference on the differences between ReferenceEquals(), static Equals(), instance Equals() and operator== in C# in the book Effective C#: 50 Specific Ways to Improve Your C#. You normally only override operator== for value types - and you should ALWAYS override it for value types if you're going to compare them. You override it for performance only. For a reference type you would almost never override operator==. -ner
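To make that concrete, here's a minimal sketch for a made-up value type (overriding operator== lets comparisons avoid the boxing and reflection the default ValueType.Equals incurs):

public struct Money
{
    public decimal Amount;
    public string Currency;

    public static bool operator ==(Money a, Money b)
    {
        return a.Amount == b.Amount && a.Currency == b.Currency;
    }

    public static bool operator !=(Money a, Money b)
    {
        return !(a == b);
    }

    // Keep Equals and GetHashCode consistent with operator==
    public override bool Equals(object obj)
    {
        return (obj is Money) && this == (Money)obj;
    }

    public override int GetHashCode()
    {
        return Amount.GetHashCode() ^ (Currency == null ? 0 : Currency.GetHashCode());
    }
}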
-
Separate .NET 2.0 Threads from 1.1
Nerseus replied to Diesel's topic in Suggestions, Bugs, and Comments
I doubt we'll create a separate language-version forum. There's already enough "confusion" about which forum to use. Adding more will just mean looking in more forums for help.

I think of the different language versions like the different database questions. There's one DB forum, not one each for SQL Server, Access, DB2, Oracle, etc. A question like "How do I select names that start with Bob?" has different answers depending on the DB being used. Even worse - someone asks "How do I pass a value from one form to another?" and you answer with a C# snippet and they're mad because they use VB.NET (or vice versa). In my opinion, tough noogies!

I think the trial-and-error system works best. Meaning, let users ask questions however they want. If they want an answer for C# 2.0, they should mention it. In time - maybe six months, give or take - it will be assumed that questions are about .NET 2.0 and you'll have to mention it if you only have .NET 1.1. Advanced users will post the right question the first time: "How would I iterate over a typed ArrayList in C# 2.0?". The other users will just have to wait for TWO replies until they learn:

"How do I loop over an ArrayList?"
"Here's how (gives example using C# 2.0 features)"
"Sorry, I'm using C# 1.1 and we don't have generics"
"Oh, here you go (gives other example)"

I liken the art of posting a question to the art of searching with a search engine. You'll get better the more you practice. -ner -
How to create index in MS ACCESS programmatically
Nerseus replied to kaisersoze's topic in Database / XML / Reporting
You run a query that looks like this:

CREATE INDEX [indexName] ON [TableName] ([ColumnName])

There are probably some other forms, such as multiple columns if Access supports them. Check the MS Access help. -ner
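If you want to run that from .NET code, it's just a non-query command (the connection string, table, and column names below are made up):

using System.Data.OleDb;

class CreateIndexExample
{
    static void Main()
    {
        string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\mydb.mdb";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            conn.Open();
            OleDbCommand cmd = new OleDbCommand(
                "CREATE INDEX idxLastName ON Customers ([LastName])", conn);
            cmd.ExecuteNonQuery();
        }
    }
}
-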
Early returns - I like them, if used "correctly". Obviously, if your program works - it works. The rest is window dressing.

Some argue that early returns make code less readable. The argument is that a function always flows to the bottom - hence one exit point. Having small functions goes hand in hand with that. Meaning, it would be OK to have only one exit point per function if every function were so small that you could visually see all of the code and fit it in your brain at one time. If you happen to have a larger function, then having some early returns to handle the exception cases makes sense. Or, if you have a function that does nothing but delegate to other functions, you could use early returns. For example, you may have a Validate() function that calls multiple other functions to handle all the details. If you choose to use bool testing on each function, you may have something like this:

private bool Validate()
{
    if (!ValidateName()) return false;
    if (!ValidateAddress()) return false;
    if (!ValidateEmail()) return false;

    return true;
}

Some OO programmers would say that code like the above should be broken down; I think it looks fine. The argument would be that each ValidateNNN() function belongs in the appropriate class. While I love programming with classes and breaking them down as much as possible, I also realize the real-world constraints of delivering on time. So, it's a trade-off.

Here's one last sample where I'd use an early return and others might not. This is a static function in a library class. It's meant to get a value out of a DataSet, substituting in a default value when the column is DBNull. NullDateTime is a constant. GetVal returns null if the column doesn't exist (another helper function). Technically, this could probably be refactored into two functions: the top half of the code handles the exception cases while the bottom half takes care of pulling out the DateTime value. Don't criticize the code, I'm sure it could be cleaned up even more. It's only to illustrate an "early out" function.

public static DateTime GetDateTime(DataRow dr, string column, DateTime defaultValue)
{
    Debug.Assert(column != null && column.Length > 0);

    if (dr == null)
        return defaultValue;

    object o = GetVal(dr, column);
    if ((o == null) || (o == System.DBNull.Value))
        return NullDateTime;

    DateTime val;
    try
    {
        val = (DateTime)o;
    }
    catch
    {
        throw new Exception(String.Format("Cannot convert param to DateTime. Column={0}.{1}, Param={2}",
            dr.Table.TableName, column, o.ToString()));
    }

    return val;
}

-ner
-
Forum Is ALWAYS messed Up!!!!!!!!!!!!!!!
Nerseus replied to kurf's topic in Suggestions, Bugs, and Comments
If you're having to scroll every time and it's not a forum post, as ME suggested, take a screenshot and post it along with your browser info so we can help. -ner -
The standard TabPage doesn't support enabled/disabled, technically. The help says the property exists because it must (TabPage inherits from Control, which has an Enabled property). Since this isn't a bug, it won't get fixed. They may add support for it, but you'll likely be better off doing it yourself or using a 3rd party control. To do it yourself, simply use something like a Panel on each TabPage and enable/disable the Panel. Maybe not perfect, but as I said, I doubt MS will "fix" what isn't technically broken. I have seen the re-ordering bug in VS, though not for some time. It appeared MUCH worse in VS 1.0. Now that I think about it, I haven't seen it at all (that I can recall) in VS 2003. My guess is that it was just a bug in their designer-generated code, hopefully fixed by now. In some of our forms, we had code that would remove all TabPages and re-add them at run-time to make sure they were in the right order. As I said, I haven't seen this at all in VS 2003, so maybe it's been fixed? -ner
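A minimal sketch of the Panel trick (the control names are made up): dock a Panel to fill the TabPage, put the page's controls inside it, and toggle the Panel:

using System.Windows.Forms;

public class TabDemoForm : Form
{
    private TabControl tabControl1 = new TabControl();
    private TabPage tabPage1 = new TabPage("Page 1");
    private Panel pnlPage1 = new Panel();

    public TabDemoForm()
    {
        pnlPage1.Dock = DockStyle.Fill;
        tabPage1.Controls.Add(pnlPage1);      // all of the page's controls go in pnlPage1
        tabControl1.TabPages.Add(tabPage1);
        tabControl1.Dock = DockStyle.Fill;
        this.Controls.Add(tabControl1);
    }

    // Disabling the Panel effectively disables the "page",
    // even though TabPage.Enabled itself isn't honored.
    public void SetPageEnabled(bool enabled)
    {
        pnlPage1.Enabled = enabled;
    }
}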
-
I like it :) I wonder if anyone has some code like:

If Me.Exists() Then
    ' Do Something
Else
    ' Who cares - I don't exist!
End If

-ner
-
A patient says, "I seem to run out of breath quite easily and get these headaches all the time. Can you help?" The doctor says, "Well, you weigh 280 pounds/20 stone/127 kilograms. You should probably follow a better diet and get some exercise." "Screw that, just tell me how to make my headaches go away." "Take some aspirin." Good luck with the aspirin, kcwallace. -ner
-
I wouldn't trust the Output window for the ExitCode. If you test your EXE with a batch job (to check the exit code) or test it some other way, the first two examples should work fine. I couldn't get "return NN;" to work even when Main was declared as returning int. I've used the "Environment.ExitCode = NN;" syntax and it works fine. -ner
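For example, a bare-bones sketch (the exit code value and EXE name are arbitrary):

using System;

internal sealed class EntryPoint
{
    static void Main()
    {
        // Set the exit code explicitly instead of relying on "return NN;"
        Environment.ExitCode = 2;
    }
}

Then test it from a batch file rather than the Output window:

    myapp.exe
    echo %ERRORLEVEL%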
-
Can we see the code you're using to do actions on the DB? You should be able to open one connection and use it for multiple actions (INSERTs, UPDATEs, DELETEs, etc.). Are you using ADO.NET to handle transactions, some other setup (Enterprise Services, the equivalent of COM+ or DTS), or maybe nothing in regards to transactions? -ner
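In case it helps, here's a rough sketch of one connection doing multiple actions inside an ADO.NET transaction (the connection string and SQL are made up):

using System.Data.SqlClient;

class TransactionExample
{
    static void Main()
    {
        SqlConnection conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=SSPI");
        conn.Open();
        SqlTransaction tran = conn.BeginTransaction();
        try
        {
            SqlCommand insert = new SqlCommand(
                "INSERT INTO Orders (CustomerID) VALUES (1)", conn, tran);
            insert.ExecuteNonQuery();

            SqlCommand update = new SqlCommand(
                "UPDATE Customers SET LastOrder = GETDATE() WHERE CustomerID = 1", conn, tran);
            update.ExecuteNonQuery();

            tran.Commit();   // both statements succeed or neither does
        }
        catch
        {
            tran.Rollback();
            throw;
        }
        finally
        {
            conn.Close();
        }
    }
}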
-
Which question are you trying to solve - how to get random numbers that are repeatable for testing, or how to get the given function to work without overflow? If you want repeatable random numbers, use the Random class. Here's some C#:

Random rnd = new Random(1);
for (int i = 0; i < 100; i++)
{
    Debug.WriteLine(rnd.Next(10));
}

Passing a value to the constructor of Random forces the seed, which will result in "random" numbers that always follow the same sequence. As far as I know, C# and VB.NET should behave the same. C# doesn't have any built-in "auto-truncate on overflow without exception" logic that I know of. I would force some extra casting on some numbers to narrow the scope. I'm not sure where that function came from, but it's doing some "big" math on numbers that may not hold everything. For example, pass in Integer.MaxValue for both x and y and what happens? Anyway, let us know more so we can help. -ner
-
Here's something I just tried - it seems to work, though it may need tweaking for you. In your watchdog:

using System;
using System.Diagnostics;

namespace MyWatchdog
{
    internal sealed class EntryPoint
    {
        [STAThread]
        static void Main()
        {
            Process process;
            do
            {
                process = Process.Start(@"c:\test.exe");
                process.WaitForExit();
            } while (process.ExitCode != 0);
        }
    }
}

This will keep running c:\test.exe until it returns 0 (success). The key is to now have your main program return its status. Here's what I did:

using System;

namespace MyProject
{
    internal sealed class EntryPoint
    {
        private EntryPoint() {}

        [STAThread]
        static void Main()
        {
            System.Windows.Forms.Application.ThreadException +=
                new System.Threading.ThreadExceptionEventHandler(Application_ThreadException);
            System.Windows.Forms.Application.Run(new Form1());
        }

        private static void Application_ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e)
        {
            System.Environment.ExitCode = 1;
            System.Windows.Forms.Application.Exit();
        }
    }
}

The key is that the program now traps any unhandled errors in the ThreadException event. This handler simply sets the ExitCode and then shuts down with Application.Exit. This would be your place to clean up any resources before shutting down. -ner
-
For the RETURN, I usually use RETURN 0 to indicate SUCCESS, but RETURN by itself works fine. Normally, a non-zero value indicates an error. It's typical to use "RETURN @@ERROR" in SQL Server. -ner
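If you need to read that return value from .NET, a sketch like this should work (the proc name and connection string are made up):

using System;
using System.Data;
using System.Data.SqlClient;

class ReturnValueExample
{
    static void Main()
    {
        SqlConnection conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=SSPI");
        conn.Open();

        SqlCommand cmd = new SqlCommand("MySaveProc", conn);
        cmd.CommandType = CommandType.StoredProcedure;

        // The RETURN value comes back through a ReturnValue-direction parameter
        SqlParameter ret = cmd.Parameters.Add("@RETURN_VALUE", SqlDbType.Int);
        ret.Direction = ParameterDirection.ReturnValue;

        cmd.ExecuteNonQuery();
        int result = (int)ret.Value;   // 0 = success, non-zero = error
        Console.WriteLine(result);

        conn.Close();
    }
}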
-
You have a lot of questions, so I'll try them one at a time:

If you want to do this "right", I would start by looking at your main queries (or all of them, if you don't have too many) and profiling them. For SQL Server, run this in Query Analyzer:

set statistics io on
set statistics profile on

This will show what the optimizer is picking (which index, if any) and whether it's seeking or scanning tables. From that you can decide if you want a covering clustered index or just a regular index.

Same as above - I'd do some checking first, unless you have a strong gut reaction that certain columns are always referenced in joins or where clauses.

What does "bulk insert" mean? A bulk insert is done through BCP (a command-line tool) or DTS in bulk insert mode. Bulk inserts are very fast, as they have options to skip a lot of integrity checks. For example, you can bulk insert rows with no constraint checks (you could end up with broken foreign keys). This wouldn't normally apply to your transactional database except for "batch jobs" or "interfaces" - programs that run off hours and do lots of DB reading/writing.

It's not a matter of one at a time. When you have a clustered index on a table, the order of rows must match the order on the physical hard disk. So if you happen to create a cluster on a column whose value can be anything (versus an identity column), you take a chance that an insert could mean rearranging ALL the data in the table on the disk. Even if your cluster is on an identity column (plus other data), you take a slightly bigger hit on inserts than with a non-clustered index. As long as the first column in your clustered index is something like an identity column, you should be OK.

You can get them with sp_help <table> or, more specifically, sp_helpindex <table>. -ner
-
Maybe we could see the whole project? Here's a sample I used that should mimic what you have. For me, running the one big loop or using what I think is your same comparison code, I get the same performance. I chose to do the drawing in OnPaint - maybe you did your graphics elsewhere? Just paste this into a new Form1 (replace everything) of a Windows C# project.

using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using System.Diagnostics;

namespace DotNetForumsTest1
{
    public enum SquareType { Node, Link, Number }
    public enum LinkType { Unknown, Yes, No }

    internal class Square
    {
        public SquareType Type;
        public LinkType Link;
        public Rectangle Bounds;
    }

    public class Form1 : System.Windows.Forms.Form
    {
        Square[][] arrGrid = new Square[31][];

        public Form1()
        {
            InitializeComponent();

            for (int i = 0; i < arrGrid.Length; i++)
            {
                arrGrid[i] = new Square[51];
                for (int j = 0; j < arrGrid[i].Length; j++)
                {
                    arrGrid[i][j] = new Square();
                    arrGrid[i][j].Type = SquareType.Node;
                    arrGrid[i][j].Link = LinkType.Unknown;
                    arrGrid[i][j].Bounds = new Rectangle(0, 0, 5, 5);
                }
            }
        }

        private void InitializeComponent()
        {
            this.Size = new System.Drawing.Size(300, 300);
            this.Text = "Form1";
        }

        [STAThread]
        static void Main()
        {
            Application.Run(new Form1());
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            int last = Environment.TickCount;

            // Rectangle rectBounds = new Rectangle(0, 0, 5, 5); // same size as those in the array
            // for (int i = 0; i < 1581; i++)
            // {
            //     e.Graphics.FillRectangle(Brushes.Black, rectBounds);
            // }

            for (int i = 0; i < arrGrid.Length; i++)
            {
                for (int j = 0; j < arrGrid[i].Length; j++)
                {
                    e.Graphics.FillRectangle(Brushes.Black, arrGrid[i][j].Bounds);
                }
            }

            Debug.WriteLine((Environment.TickCount - last).ToString());
        }
    }
}

-ner
-
LOL: these complaints about the new C# features are the exact same ones people always have about changes to any language. Scan the posts in "Random Thoughts" for complaints about the changes between VB6 and VB.NET. Substitute the VB6/VB.NET complaints with C# 1.0 and C# 3.0 and the posts are identical. Here's a rephrased quote that works for any language upgrade - this should probably be made a sticky: the simple answer, as usual, is that if you don't like the new features, don't use them! If you're worried that others are using them, then stop worrying - you'll never be able to change how others program unless that's your job. -ner
-
Basically there's no "magic", just a convenience. Before someone had a one-line if, all ifs looked like:

if (expression == true)
{
    return true;
}

Someone said "hey, let's make it more convenient in certain circumstances":

if (expression == true)
    return true;

-ner
-
First, stored procedures can do all sorts of different things. They can (and often do) have some level of business rules in them. They can do simple INSERT, UPDATE, DELETE and SELECTs as well. More often than not, about 75% of the procs in a system are the "simple" kind. It's the other 25% where we spend 80% of our development time. This includes searches, complex "gets", and reports. Depending on how you implemented your system, that 25% could also include the "save" procs. Here are some solutions I've seen/used in regards to saving data:

1. Have one proc per INSERT, UPDATE or DELETE. Requires an outside transaction (I've used COM in VB6 with MSDTC and ADO.NET transactions, among others).
2. Have a number of procs that each do more complex saving. Similar to #1.
3. Have a single "save all" proc. This was done with an XML string passed to a proc which used OPENXML (in SQL Server) to read the XML as a table.

That was to shed some light on my background and how I'm going to answer the questions you asked. Since you mentioned procs, I assume you do not want to consider dynamic SQL. If you're like most, you immediately dismiss dynamic SQL for a number of reasons. You may want to read this article and some of the ones that spawned from it.

Also, before I answer any questions about procs - or DB access at all - I'll focus a short paragraph on what I think the real question is: how do you best separate the business layer from the DB layer? Since most business apps worth talking about save data to a relational database, our first inclination as developers is to try and get that data into a similar structure in our code. .NET has DataSets, and they are PERFECT for representing DB data in client-side or webserver code. They also have a ton of advantages such as binding, remembering original values, selecting filtered views of data, etc. That brings up a fundamental question of what you store in a business object. You may also debate whether you need so many "layers" if the "business" object has a DataSet that's ripe for sending to the DB for updates.

I would throw this out first: in my experience, working with "pure" objects for client-side applications is FAR nicer (more maintainable, with easier-to-read code) than working with data-centric objects. Once I moved to more OO programming in the client code, I could finally "see the light" as to why objects are more intuitive. Unfortunately, I have yet to see any good book on how to apply OO ideas to most business applications. In other words, how do you step from "I have an object that represents a noun - such as a tire and a car" to a real-world application that has bizarre things like customer data, procedure codes, fees and other things?

Regardless of whether you go with an OO solution or not, there's still the question of how you represent things on the screen and how you get them into the DB. Easy/short answer: I would *always* go for easier-to-understand code in a first phase of design. Go for the easy solution and don't worry about the time spent calling the DB for gets or updates.

Explanation: I make this assumption on one fact: I'm a smart guy and I can figure things out. From that I assume that if my code does what it's supposed to, but it's slow, I can work on tweaking things later. If I wrote 100 stored procs and 80 of them are too slow because of individual calls instead of combined ones, then I say "oh well" and I combine them. You could argue that spending time up front to "figure out" how to make them faster is worth it.
I would say this: if you're smart enough to know how to make them faster up front, then you would just do that - you wouldn't really need to plan for it. That "smartness" is really experience. If you don't have that experience, then I can bet that most of the time, planning to make things faster before you know what's slow is going to bite you in the butt.

I think I hear a bad assumption being made by your friend. The argument that a DataSet may be able to handle referential integrity "faster" because it prevents DB calls may be true, but it would be true even without DataSets, I would hope!

This one may be the easiest to answer. One of the main reasons to have more layers (n-tier) is to break up the dependency/coupling of those layers - to separate out what each layer does. From that point of view, I would hope that a DB change (table changes, etc.) would mostly just affect the stored procedures.

Here's a thought: why not use DataSets to get data from the DB? Use stored procedures, if that's your method, to get data out of the DB and into the DataSet. Whether that's one big proc call or a bunch of small ones - don't worry, just take a route and run with it. Return the DataSet to the client but wrap it in an object for later. When it's time to save, call a method on the object. Use whatever you want to save the DataSet - a DataAdapter, individual Command objects, whatever. Now you've got a model to start with and you have your layers.

The next step, in my mind, is to make that object useful. Give it some meat. Don't let the UI use the DataSet directly unless necessary. For example, most of the apps I work on have 5 or 6 meaty tables. The rest are lookups to populate drop-downs or enforce some kind of rule - the RI your friend mentioned. Of the 5 or 6 meaty tables, 1 is usually represented in a "header", 1 or 2 are tables with one row, and 2 or 3 usually end up needing some kind of grid to allow adding/removing/editing rows. I would start by looking at your UI (or requirements spec, if you have one) and encapsulating the non-grid tables. There's no reason single-row tables have to be exposed via the whole DataSet. A purist may argue that the UI shouldn't even know that a DataSet is involved. To that I say "who cares?" - it's usually the same developer writing the object and the UI!

Change the UI to use the object as much as possible. When you run into the grids, you more or less have to bind to something. It's kind of pointless to KNOW that you have a DataSet and yet only expose collections of objects that aren't conducive to binding, which is what the grid wants. Why rewrite all the code that the DataSet provides? To be pure OO? Poo on that - use the DataSet and save yourself some time. See the sketch below for what such a wrapper might look like.

And that's my short answer on this subject. -ner
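A minimal sketch of that "wrap the DataSet in an object" idea (all the names - Customer, the table names, the SELECT - are made up):

using System.Data;
using System.Data.SqlClient;

public class Customer
{
    private DataSet _data;

    public Customer(DataSet data)
    {
        _data = data;
    }

    // Encapsulate the single-row "header" table behind real properties
    public string LastName
    {
        get { return (string)_data.Tables["Customer"].Rows[0]["LastName"]; }
        set { _data.Tables["Customer"].Rows[0]["LastName"] = value; }
    }

    // Let grids bind to the "meaty" tables directly - no point rewriting what DataSet gives you
    public DataView Orders
    {
        get { return _data.Tables["Order"].DefaultView; }
    }

    // The save method hides whether it's one big proc call or many small ones
    public void Save(SqlConnection conn)
    {
        SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customer", conn);
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);
        adapter.Update(_data, "Customer");
    }
}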
-
To extend PD's power-tool analogy, here is a true story: when I was younger I didn't have a circular saw - I only had a jigsaw (a small hand-held power saw with a tall skinny blade, used mainly to cut small curves). I had to trim half an inch off of a door to make it fit my irregular doorframe. I decided to use the jigsaw. You can probably imagine just how bad the door looked when I was done - like the side of a ridged potato chip. Did it work? Oh yeah, that door was no longer too wide. It wasn't straight and it wasn't pretty, but it worked and I was proud. It also took about 45 minutes to cut 2.5 feet and ruined the blade, but it worked. Now that I'm older and more experienced, I would never in a million years try that again.

I can see people creating classes and then using extenders to fix them instead of adding the functionality they need. I can already hear the arguments as to why they SHOULD be extended, but they will likely be excuses rather than valid reasons. Unfortunately for purists, who want everyone to do everything "right", there will always be the issue of balance. At some point you have to give people the tools and let them make mistakes and learn from them.

Programmers generally fall into two categories: those that want to be programmers and those that just do it because it's their job. I want to be a programmer and I want to be good. For me, that also means investing the time and energy to do it "right". I would guess that the people who over-extend (pun intended) the new functionality are the ones who program because it's their job. Even if you could go to their office and train them on the reasons why they shouldn't overuse extenders, they'll revert to overusing them, because to them it's just a tool for finishing the project. These are also, unfortunately, the same type of programmers who generally leave a company with all of their crappy code behind for the "purists" to clean up. But that's another issue. -ner
-
There is no "standard" method of validation, as you yourself pointed out. For smaller forms/applications you may do just fine checking a few controls and popping up a message box or changing a label caption. I'd also suggest looking at the ErrorProvider, which can place an icon next to the field with the error and creates a tooltip to explain it. This is a bit easier than using a label to display the message, as it allows you to check and display multiple problems (better for the user, harder for the programmer).

My current project uses DataBinding to DataSets. All validation is done against the DataSet, not against controls, so that we can provide Web UIs as well as Windows UIs. We use XSL files that have tests written in XPath. The DataSet, as an XmlDocument, is transformed by the XSL to determine what's valid or not and to display a report of the failed tests (produced by the XSL transform). In our case, we write XPath that produces a "hit" when something fails. We happen to have the transform produce another DataSet of errors that can be used for a visual report as well as for driving an ErrorProvider (in Windows) to indicate the bad fields. The error report can be used to jump to a control that has a problem by analyzing the error and the binding on the UI.

Those are two examples of validation, one very easy and one very complex. I've read how others have put all business rules right in their objects. In that scenario, the UI code (presumably Windows for you) would only interact with an object. The object would provide a means to validate its current state and would have to provide the UI with a way of displaying what's wrong. In that scenario, the object handles all of the tests using whatever it has available. Meaning, if the object simply has a private field named lastName, your code would look much the same but check the private field rather than a UI textbox. This concept, among others, is discussed in the book Expert C# Business Objects (also available for VB.NET). -ner
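For the simple end of the spectrum, here's a bare-bones ErrorProvider sketch (the control names and the rule are made up):

using System;
using System.ComponentModel;
using System.Windows.Forms;

public class ValidationForm : Form
{
    private TextBox txtLastName = new TextBox();
    private ErrorProvider errorProvider1 = new ErrorProvider();

    public ValidationForm()
    {
        txtLastName.Validating += new CancelEventHandler(txtLastName_Validating);
        this.Controls.Add(txtLastName);
    }

    private void txtLastName_Validating(object sender, CancelEventArgs e)
    {
        if (txtLastName.Text.Trim().Length == 0)
        {
            // Shows the icon next to the field, with the message as a tooltip
            errorProvider1.SetError(txtLastName, "Last name is required.");
        }
        else
        {
            // An empty string clears the icon
            errorProvider1.SetError(txtLastName, "");
        }
    }
}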