
Everything posted by Mike_R
-
Hi guys, I have recently switched my coding from VB.NET to C#. Overall, everything is just fine. However, I'm a little puzzled over how namespaces work in C#. I have heard that they are much better, but as best I can tell they simply seem stricter. (Or is that why they are better?) The two things that I've noticed are: (1) You cannot import a class (type) name as part of a 'using' directive. This is not a big deal, but in VB.NET importing an enum type lets you access its fields directly without requiring the enum type name as a prefix, which is nice. On the other hand, importing a type would allow any of its static methods to be called without the type name. This could be good in that you could create methods that look like keywords (you can do this in VB), but I can also see why C# would want to prevent it. (2) You cannot seem to import a partial name. This is the part I'm most confused about. For example, in VB I can import 'System.Windows' and then later call 'Forms.MessageBox.Show("foo")'. In C#, however, it seems a bit more all-or-nothing. I cannot import 'System.Windows'; it seems that I can either import 'System.Windows.Forms' as a whole, or not at all. In general, it seems that if a namespace contains only other namespaces, but no types, I cannot usefully import it -- I can only import a full path to a namespace that contains at least one type. Or am I doing something wrong? Any thoughts or help here is much appreciated... Mike
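Here is a minimal C# sketch of the two differences described above (the exact compiler behavior may vary by version; later C# versions added 'using static' for importing a type's members):

    // Difference (2): C# using directives are not hierarchical.
    using System.Windows.Forms;                 // OK: imports the types in this namespace
    // using System.Windows;                    // even where this compiles, it does NOT let you
                                                // write Forms.MessageBox.Show("foo") afterwards

    // Difference (1): you cannot import a type itself.
    // using System.Windows.Forms.MessageBox;          // error: MessageBox is a type, not a namespace
    // using static System.Windows.Forms.MessageBox;   // C# 6+ only: then Show("foo") works unqualified

    class NamespaceDemo
    {
        static void Main()
        {
            MessageBox.Show("foo");             // the type name is required here...
            // ...and enum members likewise need the enum type as a prefix:
            MessageBox.Show("foo", "bar", MessageBoxButtons.OKCancel);
        }
    }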
-
Re: VB vs C# event handling Yes, *generally* this stuff is a good thing. I think that both the VB and C# teams have mostly made excellent decisions in terms of trade-offs. And it's not just VB that creates these artificial constructs behind the scenes. The things that come to mind that C# has implemented include: (a) 'yield return' iterators, which maintain state behind the scenes via a generated state machine while externally implementing the IEnumerable interface; (b) Nullables, which C# had to jump through a lot of hurdles to achieve. C# had much more to do here than VB because VB makes some different assumptions with respect to value-type auto-initialization than C# does, and, frankly, because C# took nullables further than VB did. (c) Anonymous methods, which maintain state via the construction of hidden classes under the hood. There must be many more examples like this, but this is what comes to the top of my head...
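For anyone curious, here's a small illustrative sketch of (a) and (c) -- in both cases the compiler generates hidden classes to carry the state:

    using System;
    using System.Collections.Generic;

    class CompilerMagicDemo
    {
        // (a) 'yield return': the method body is rewritten into a hidden state-machine class.
        static IEnumerable<int> CountTo(int max)
        {
            for (int i = 1; i <= max; i++)
                yield return i;              // 'i' and 'max' survive between MoveNext() calls
        }

        static void Main()
        {
            // (c) Anonymous method: 'total' is hoisted into a hidden closure class.
            int total = 0;
            Action<int> add = delegate(int n) { total += n; };

            foreach (int i in CountTo(3))
                add(i);
            Console.WriteLine(total);        // 6

            // (b) Nullables: a value type that can also represent "no value".
            int? maybe = null;
            Console.WriteLine(maybe.HasValue);   // False
        }
    }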
-
Hey Pickle, I don't have much to add here because I'm not very strong with threading. But just to be sure, I assume you tried 'System.Timers.Timer' or 'System.Threading.Timer', as opposed to 'System.Windows.Forms.Timer'? The 'System.Windows.Forms.Timer' has to be inaccurate because it operates through the message pump... The others should definitely be more accurate -- but I still don't know if they'll meet your 1ms accuracy requirement. Just a thought... Ignore all this if you've already covered this base. :) Mike
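Something along these lines is what I had in mind for System.Timers.Timer (just a rough sketch -- it fires on a thread-pool thread rather than through the message pump, though the default Windows timer resolution of roughly 15ms may still rule out true 1ms accuracy):

    using System;
    using System.Timers;

    class TimerDemo
    {
        static void Main()
        {
            Timer timer = new Timer(100);          // interval in milliseconds
            timer.Elapsed += delegate(object sender, ElapsedEventArgs e)
            {
                Console.WriteLine("Tick at {0:HH:mm:ss.fff}", e.SignalTime);
            };
            timer.AutoReset = true;
            timer.Start();

            Console.ReadLine();                    // keep the process alive while the timer runs
            timer.Stop();
        }
    }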
-
Re: AddHandler during constructor Actually, this is exactly what VB is doing. When a new object is assigned to the variable, extra code is emitted to release the old object via a 'RemoveHandler' call, then the new object is set, and then 'AddHandler' is called for the new object. This only occurs for variables declared 'WithEvents'; otherwise you can just do C#-style event handling by assigning the delegate directly. Ok, this makes sense. I thought so, but I wasn't 100% sure. Yeah, I agree on both counts: it sounds convoluted, but using it is very smooth. In fact, the handshake between the variable declared 'WithEvents' and the event handler declared 'Handles VariableName.EventName' is hard to beat. But it's really only good for object-event pairings that are relatively static. That makes it perfect for controls on a form and many other situations... Again, Paul Vick has a very nice discussion of what VB is doing behind the scenes here: http://www.panopticoncentral.net/archive/2004/08/03/1536.aspx Mike
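Roughly speaking, the compiler turns a 'WithEvents' variable into a property that does the unhook/re-hook for you. A C# sketch of the equivalent pattern (the names here are illustrative; the real generated code differs in its details):

    using System;
    using System.Windows.Forms;

    class WithEventsSketch
    {
        private Button _myButton;                        // the hidden backing field

        // The WithEvents "variable" is really a property like this:
        private Button MyButton
        {
            get { return _myButton; }
            set
            {
                if (_myButton != null)
                    _myButton.Click -= MyButton_Click;   // RemoveHandler on the old object
                _myButton = value;
                if (_myButton != null)
                    _myButton.Click += MyButton_Click;   // AddHandler on the new object
            }
        }

        // The method marked 'Handles MyButton.Click' in VB:
        private void MyButton_Click(object sender, EventArgs e)
        {
            Console.WriteLine("Clicked");
        }
    }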
-
Re: Handles is just a concise AddHandler It's counter-intuitive because VB.NET is doing the AddHandler and RemoveHandler for you... So one should either use a 'WithEvents' variable and let the compiler take care of it, or explicitly use AddHandler and RemoveHandler yourself. But not both! Paul Vick has a nice discussion of what VB is doing behind the scenes here: http://www.panopticoncentral.net/archive/2004/08/03/1536.aspx CJLeit, I don't have an IDE in front of me, so I cannot test this, but if you really want to use the 'WithEvents' construct here, you should be able to. You would first create a Shared field somewhere, declaring the variable 'WithEvents' and typed as 'System.Windows.Forms.Binding':

    Shared WithEvents MyTextBinding As System.Windows.Forms.Binding

After that you should be able to use the dropdown controls within Visual Studio to find your 'MyTextBinding' field and then choose the 'Format' event. The IDE will then create a stub for you. (Or you can probably simply add 'Handles MyTextBinding.Format' to the end of your existing 'FormatName()' method. I'm not 100% sure of this, but my guess is that it would work.) Ok, lastly, something has to set MyTextBinding to actually hold an object. So somewhere in your code you'll need something like this:

    MyTextBinding = textBox.DataBindings("Text")

To be honest though, if you are comfortable using AddHandler and RemoveHandler, then doing so is probably more direct and clear. I would let the IDE/designer create the 'WithEvents' and 'Handles' constructs for the form's controls, but otherwise, use whatever is cleaner and easier for you. Hope this helps... Mike
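For what it's worth, the direct-hookup alternative mentioned above looks something like this in C# (just a sketch; 'textBox' and 'FormatName' stand in for whatever names the real code uses):

    using System;
    using System.Windows.Forms;

    public class BindingHookupSketch
    {
        public void HookFormatEvent(TextBox textBox)
        {
            // Equivalent of VB's AddHandler: subscribe directly to the binding's Format event.
            Binding textBinding = textBox.DataBindings["Text"];
            textBinding.Format += new ConvertEventHandler(FormatName);
        }

        private void FormatName(object sender, ConvertEventArgs e)
        {
            // Formatting logic goes here, e.g. e.Value = ((string)e.Value).ToUpper();
        }
    }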
-
Ah, ok, now I see what you mean: you Assert if the input or output is outside the allowable range from a 'unit test' point of view. That makes perfect sense. Well, a separate set of unit tests would certainly be more robust (if one had the time!), but there's something nice about this "unit test" being built right into the code itself, if you will. Thanks Ner. :) Btw, I started scratching my head on this "post-validation" idea of mine, wondering if it really did make any sense. I think that mostly it doesn't, but I looked through my code and found that I was using it where validating the input was an expensive check. In particular, I have some calls to COM objects that can fail (among other reasons) if the COM object's RCW is no longer valid (maybe some idiot called Marshal.ReleaseComObject() or the like) or if the RCW is valid but wraps an invalid object (say the user closed the workbook to which you held a reference). Either of these cases is sure to cause a failure for any operation on that object. However, testing for an invalid COM object is expensive. And your code should never get into this situation in the first place, but if you do get an exception, well, this is certainly the first thing that I would check for. Since testing whether the COM object is valid is expensive, I test for this only in the Catch section. That is, I do "post-validation" checking, not "pre-validation". Then, if the object is invalid, the code throws an invalid argument exception. (And no, I don't pass along an inner exception; since the object itself was invalid, that's all the caller needs to know.) On the other hand, if the COM object is valid, then, well, "something else" happened and I just call 'throw' so that the stack trace remains intact. Anyway, if any of this makes sense to anyone... :-\
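In outline, the pattern looks roughly like this in C# ('IsComObjectStillValid' is a hypothetical helper standing in for whatever expensive validity check the real code performs):

    using System;

    public class PostValidationSketch
    {
        public void DoSomethingWith(object comObject)
        {
            try
            {
                // Normal work against the COM object goes here...
            }
            catch
            {
                // Only now pay for the expensive validity check:
                if (!IsComObjectStillValid(comObject))
                    throw new ArgumentException("The COM object passed in is no longer valid.");

                // "Something else" happened, so re-throw and keep the stack trace intact.
                throw;
            }
        }

        private bool IsComObjectStillValid(object comObject)
        {
            // Hypothetical placeholder for the expensive check described in the post.
            return comObject != null;
        }
    }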
-
Agreed... argument validation should happen up front. But in this case there is no inner exception to include in the error message. More and more I think I'm comfortable leaving the inner exceptions out of my argument exception classes. Ouch... Ok, I do agree that argument validation should be done up front, no problem there. I was stretching the point to show that even if you did have an inner exception to provide with your argument exception, well, even then I can't see the point in providing it. Post-checking within a Catch section is very normal, of course; it's just not normal for argument validation. Still, I think one could make the case for it, but that would side-track the issue here... Ugh, the inverse logic ("report if false") mentality of Trace.Assert() drives me up the wall... For example, shouldn't your example be Trace.Assert(stringsArray != null, "stringsArray == null"); Maybe I'm a dolt, but I find this surprisingly tricky to get my head around. But syntax aside, that does sound pretty good. I think I achieve more or less the same by trapping AppDomain.UnhandledException and Application.ThreadException. From there you can report a message box to the user, or just log it. But if you pre-validate and find a whoopsie, then Trace.Assert() on the spot does look pretty good. Well, if it's fatal, I guess. Yeah, that is a problem... What if the caller has error handling and will quietly handle a failure and try again, or do something else? One has to be certain that the failure is pretty fatal (ok, sure, most exceptions are) before you side-step the error handling and annoy the user, yeah? Yes, this is what I do. So much so that I've started to question inner exceptions at all... But I think I could see where, say, a file is locked (CAS restrictions, or in use by another user, or...) and so one *could* throw, say, an InvalidOperationException, providing an inner exception along with it. But for invalid-argument exceptions, I really have a hard time seeing the use for inner exceptions. What's the "new style"? (Sorry, I don't use this command myself.) For me as well. I'm almost embarrassed to say how naked most of my code is. On the other hand, if it fails, it's generally because of bad inputs. Yeah, this sounds smart. Thanks for the tips guys...
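For completeness, here's roughly how the two things mentioned above look in C# -- Trace.Assert with its "report if false" condition, and hooking the two catch-all events (a sketch only; the handler names are made up):

    using System;
    using System.Diagnostics;
    using System.Windows.Forms;

    static class LastChanceHandlers
    {
        [STAThread]
        static void Main()
        {
            // Catch exceptions thrown on the UI thread:
            Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(OnThreadException);
            // Catch exceptions thrown on any other thread:
            AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(OnUnhandledException);

            string[] stringsArray = new string[0];
            // Trace.Assert reports only when the condition is FALSE, hence the inverse-feeling logic:
            Trace.Assert(stringsArray != null, "stringsArray == null");

            Application.Run(new Form());
        }

        static void OnThreadException(object sender, System.Threading.ThreadExceptionEventArgs e)
        {
            MessageBox.Show(e.Exception.Message);   // or just log it
        }

        static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            // Log e.ExceptionObject; the process may be terminating at this point.
        }
    }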
-
Yes! That's exactly right... However, I'm questioning this approach, at least as far as argument exceptions are concerned, such as ArgumentException, ArgumentNullException, ArgumentOutOfRangeException, etc. Here's the issue: of what value can the inner exception be if we know that the input(s) is/are invalid? The caller blew it, period. For example, let's take a look at two different versions of very similar code. The first one uses pre-validation checking:

    private void myMethod(string[] stringsArray)
    {
        // Pre-validation checking:
        if (stringsArray == null)
        {
            throw new ArgumentNullException("stringsArray", "The array passed in is null.");
        }
        else if (stringsArray.Length == 0)
        {
            throw new ArgumentOutOfRangeException("stringsArray", "The array passed in is zero-length.");
        }

        // Arguments are valid, so run:
        try
        {
            // Your code goes here...
        }
        catch
        {
            // We don't need a Catch section at all:
            // But, just to be explicit:
            throw;
        }
    }

So in the above, if we have an invalid argument, we throw an exception, but there is no inner exception to pass back to the caller. And to be honest, we don't need one: we simply tell the caller what went wrong, which is that the argument was no good. Ok, now let's look at post-validation:

    private void myMethod(string[] stringsArray)
    {
        try
        {
            // Your code goes here...
        }
        catch
        {
            if (stringsArray == null)
            {
                throw new ArgumentNullException("stringsArray", "The array passed in is null.");
            }
            else if (stringsArray.Length == 0)
            {
                throw new ArgumentOutOfRangeException("stringsArray", "The array passed in is zero-length.");
            }
            else
            {
                // No clue, so just re-throw...
                throw;
            }
        }
    }

The above has the same exact functionality, at least as far as the caller is concerned. The caller will receive valid results, argument exceptions, or some other exception for exactly the same reasons in both versions. However, in the 2nd version, we have the opportunity to provide an inner exception. And your code showed exactly how to do it. But the question is: should we? I'll copy what I wrote in the first post:
-
Ok, you make some very good points. Yeah, this does make some sense. I'll have to think about this some more. Still, I feel like there are scenarios where reporting the inner exception just cannot be helpful... Ok, yes, I often do the same. I call this "post-checking validation" as opposed to "pre-checking validation". (I have not a clue if there is a standard name for this, or even if this is a standard thing to do...) But most of my code is naked, to be honest. Some does pre-validation, validating the argument values before executing any code. Some does post-validation, whereby I run the code within a Try-Catch block and then only in the Catch section do I check for a null input or an argument out of range. 99.9%+ of the time the inputs are correct, and so I prefer to only check for trouble when, well, I know there was trouble. :) But in this case, my error handling tends to look something like this:

    private void myMethod(string[] stringsArray)
    {
        try
        {
            // Your code goes here...
        }
        catch
        {
            if (stringsArray == null)
            {
                throw new ArgumentNullException("stringsArray", "The array passed in is null.");
            }
            else if (stringsArray.Length == 0)
            {
                throw new ArgumentOutOfRangeException("stringsArray", "The array passed in is zero-length.");
            }
            else
            {
                // No clue, so just re-throw...
                throw;
            }
        }
    }

In the above, I report what I can, but if "something else" happened, then I just pass on the exception as if it were never handled, via the 'throw' call. Yes, it could be that the file does not exist, the string was not valid at all (bad format), a CAS violation, who knows... So which exception do you throw, passing the inner exception into it? I guess InvalidOperationException() sounds pretty good here. And I guess an inner exception could help clarify here. Ok, I think you sold me. :) Although I'm still not sold on argument inputs. If I cleanly know that the "Argument is out of Range" or the "Argument is Null", what other inner-exception information could possibly be relevant? Note, though, that System.ArgumentNullException and System.ArgumentOutOfRangeException both have constructors that permit passing in an inner exception. But I'm having a hard time seeing the relevancy here. Hmmm, I'm not sure that we can get around this. Another approach would be protected virtual methods that let you override the 'Message' and/or other values. Actually, 'Message' is an overridable property. This is a very good question though. I wonder why classes do not inherit constructors? I think the issue must be that needing to hide a base class's constructor would be more frequent than wanting to pass it through exposed. For example, if an 'Employee' class inherited from the 'Person' class, the Employee class's constructor would need all the information that was required to make a new Person (FirstName, LastName, etc.) but would also require other info, like JobTitle, etc. So exposing the base class's constructor, which only includes FirstName and LastName, would be a serious problem: it would allow the caller to create an incomplete/invalid object. So I think, more often than not, not inheriting constructors makes sense. It very well might not make as much sense for Exception classes, however, which more often than not have similar or identical constructors. But I don't think they can change the inheritance rules just for one set of classes, unfortunately...
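A quick C# sketch of that constructor-inheritance point (the Person/Employee names are just the illustration from above):

    using System;

    public class Person
    {
        public string FirstName;
        public string LastName;

        public Person(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }
    }

    public class Employee : Person
    {
        public string JobTitle;

        // The base constructor is NOT inherited; if it were, callers could build an
        // Employee with no JobTitle. Instead we expose only a complete constructor:
        public Employee(string firstName, string lastName, string jobTitle)
            : base(firstName, lastName)
        {
            JobTitle = jobTitle;
        }
    }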
-
Ok, I'm a little confused about something... I get the idea of Exceptions and the 'Inner Exception' or "innerEx", if you will. But the more I think about it, the more trouble I have seeing any use for the 'innerEx' parameter of an exception class's constructor. For example, I have my own custom Exception classes, and I have dutifully provided an 'innerEx' parameter for all my constructors. But I think I have never, ever used them when throwing an exception. The reason is that either I know what went wrong, in which case I throw the correct error type without any 'innerEx' provided, or I don't know, in which case I just re-throw via a 'throw' call without any parameters in order to preserve the error stack trace when it is eventually handled by the higher-level error handler. As an example, System.ArgumentOutOfRangeException has a constructor overload that provides for a 'message' and an 'innerException'. Now, if the argument passed in is out of range, how can there be an inner exception at all? If the method is pre-checking the values and finds the argument out of range, then there can be no inner exception... ... I suppose that one could do post-validation, that is, not pre-check, but surround the code with a Try-Catch-Finally block, and then only check for invalid arguments or the like within the Catch block. This is fine, and in this case there could theoretically be an inner exception to report. But would it make any sense to actually report that 'innerEx' here? For example, if the caller passed in, say, a null reference, or perhaps a negative integer where only positive numbers are valid, or some other invalid argument, then shouldn't only the error itself be reported? What purpose could it serve to report the error and in addition report the chaos this invalid argument caused when executed?? I was wondering what you all thought about this, because I think I'm leaning towards removing the 'innerEx' parameters from the constructors of my exception classes. Certainly for the Argument-related exceptions. Any thoughts?
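Just to make the two options concrete, a minimal C# sketch of the contrast I mean (hypothetical names throughout):

    using System;

    public class InnerExDemo
    {
        public void DoWork(int positiveNumber)
        {
            // Option 1: I know what went wrong -- throw the right type; no inner exception exists or is needed.
            if (positiveNumber <= 0)
                throw new ArgumentOutOfRangeException("positiveNumber", "Value must be positive.");

            try
            {
                // Real work goes here...
            }
            catch
            {
                // Option 2: I don't know what went wrong. I *could* catch (Exception ex) and wrap it:
                //     throw new InvalidOperationException("DoWork failed.", ex);
                // ...but in practice I just re-throw so the stack trace is preserved:
                throw;
            }
        }
    }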
-
Since the order of operations is arbitrary, I think that it's a bad idea to depend on it with calls like 'array[i++] = i++'. It's just asking for trouble. But because the order of evaluation is not necessarily "set in stone", even this is ambiguous (unfortunately): 'array[i] = i++'. I would prefer to read the above as the 'i++' occurring after the assignment is complete, but it would seem that this is not necessarily written in stone, and so it *could* go the other way -- as IceAzul would seem to prefer. (I'm not sure why, though; I think the "++" happening *last* is the most intuitive to me, but maybe that's just me.) The other possibility is that an RHS-then-LHS evaluation order *is* written in stone somewhere. Ok, not as far as the IL is concerned, but it could be a C# rule (or even a C/C++ rule). Assuming that "RHS then LHS" is in fact a rule, then having any '++' on the LHS is probably a bad idea in any situation, or at least certainly if the RHS has a reference to the same variable. Hmm, I wonder what happens here: 'array[i] = ++i;'. I'm guessing that the LHS is evaluated first, then i is incremented, then the RHS. But maybe it's "++" precedence?? I doubt it, but it would be interesting. If this did occur, then the LHS and RHS are in fact not 100% independent. But I'm betting that they are... Personally, I would just "keep it clean" and avoid this nonsense! E.g.: 'i++; array[i] = i;'. Nice and simple. :)
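For what it's worth, my understanding is that the C# specification does pin this down: operands are evaluated left to right, so the LHS index is evaluated before the RHS (unlike C/C++, where this is undefined). A little sketch of what that implies -- though the conclusion is the same: don't write code that depends on it:

    using System;

    class EvaluationOrderDemo
    {
        static void Main()
        {
            int[] array = new int[5];

            int i = 0;
            array[i++] = i++;
            // LHS index evaluated first: element 0 is chosen, i becomes 1.
            // RHS evaluated next: yields 1, i becomes 2.
            Console.WriteLine("array[0] = {0}, i = {1}", array[0], i);   // array[0] = 1, i = 2

            i = 0;
            array[i] = i++;
            // Element 0 is chosen before the RHS bumps i.
            Console.WriteLine("array[0] = {0}, i = {1}", array[0], i);   // array[0] = 0, i = 1

            // The clean version -- no guessing required:
            i = 0;
            i++;
            array[i] = i;
        }
    }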
-
Unfortunately, MS Office applications are not designed to be used server-side. I'd have a look at these links: 'Considerations for server-side Automation of Office' and 'Excel Access Violation' -- Mike
-
In the right place, Nullable types are a really beautiful thing. You could make the equivalent on your own; it's not hard, but it requires making a generic structure and then creating static widening and narrowing operators to handle the implicit and explicit conversions. The advantage of using the built-in Nullable generic structure is that you have the sort of 'Enum status check' that ME was showing (although only one status, really: either 'HasValue' = True or False), while allowing implicit and explicit conversions between the generic structure and the underlying type. But the best thing about Nullables, versus a "roll-your-own" approach, is that they are standardized. Everyone knows what to expect and how to use them. I would only "roll my own" if I needed greater functionality than the .NET 2.0 Nullables provide natively.
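Roughly what the "roll your own" version would look like in C# (illustrative only -- in practice you'd just use the built-in System.Nullable<T>):

    using System;

    struct MyNullable<T> where T : struct
    {
        private readonly T value;
        private readonly bool hasValue;

        public MyNullable(T value)
        {
            this.value = value;
            this.hasValue = true;
        }

        public bool HasValue { get { return hasValue; } }

        public T Value
        {
            get
            {
                if (!hasValue) throw new InvalidOperationException("No value present.");
                return value;
            }
        }

        // Widening (implicit) conversion: T -> MyNullable<T>
        public static implicit operator MyNullable<T>(T value)
        {
            return new MyNullable<T>(value);
        }

        // Narrowing (explicit) conversion: MyNullable<T> -> T
        public static explicit operator T(MyNullable<T> nullable)
        {
            return nullable.Value;
        }
    }

    class NullableDemo
    {
        static void Main()
        {
            MyNullable<int> n = 5;                   // implicit (widening)
            Console.WriteLine(n.HasValue);           // True
            int i = (int)n;                          // explicit (narrowing)
            Console.WriteLine(i);                    // 5
        }
    }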
-
Ok, it sounds like you do not have full control over all the objects in question, particularly those of the .NET Framework itself, so requiring an interface would not help here. Your idea to overload "=" and "<>" would work, but you'd need to pre-define this for all objects in question against a custom class that you create. Do-able, but I don't know that it's worth the extra time just to create a "slick" 'Select Case' statement instead of a compound 'If...ElseIf...ElseIf...' statement. But this absolutely would work. The only real problem that I see is that this is such a non-standard use that anyone else looking at your code would likely be thrown. The "=" comparison is normally overloaded for structures and is meant to return 'True' in a cloned situation; the "Is" operator, however, is defined to return 'False' in a cloned situation and is a true object-identity test. By overloading the "=" operator in this manner, you are redefining it to be an object-identity test, and so your Select Case statement, while clear to you, could easily throw other programmers trying to read it. I realize that in C# the "==" operator is in fact used for both structure equality and object identity, but VB.NET has other expectations, and I'm not sure that you should be changing them just for your own code. What you are trying to accomplish is usually done by using 'Select Case True' and then putting whatever condition you want within each check:

    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        Dim g As New abc
        Dim h As New abc
        Dim p As abc = h

        Select Case True
            Case p Is g
                Text = "p is g"
            Case p Is h
                Text = "p is h"
        End Select
    End Sub

Admittedly, not as slick as what you were trying to do, but it's not bad, and the operations themselves are exposed, not hidden away within an operator overload definition inside the 'abc' class... Just a thought! I do like your thinking here, a lot, but Select Case is designed for value or "clone" equality, not object-identity testing, and your overloaded "=" approach could really throw other people trying to read your code. Mike
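To illustrate the C# side of that comparison (a sketch; the 'abc' class name is just carried over from above) -- for classes, '==' defaults to reference identity unless someone overloads it, and object.ReferenceEquals always means identity:

    using System;

    class abc { }

    class IdentityDemo
    {
        static void Main()
        {
            abc g = new abc();
            abc h = new abc();
            abc p = h;

            Console.WriteLine(p == g);                        // False: different objects
            Console.WriteLine(p == h);                        // True: same object (reference identity)
            Console.WriteLine(object.ReferenceEquals(p, h));  // True: identity, regardless of any '==' overload
        }
    }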
-
Thanks, Marble, for trying some test code... So it looks like we did the same thing. I'll have to try that again; I might have been tired and done something silly. I'm swamped right now, but when I can, I'll try it in both C# and VB.NET and we'll see. If I get anything interesting, I'll post back... Thanks ME
-
Hi guys, I'm trying to get a method and invoke it, ignoring the case of the name passed in. The following works 100% fine:

    Dim procName As String
    procName = "MyMethod" '<-- Or whatever is to be called.
    Dim paramTypes As Type()
    ' I then set the paramTypes for the arguments properly...

Ok, and then we make the following call:

    Dim methInfo As Reflection.MethodInfo = _
        objectReference.GetType.GetMethod(procName, paramTypes)

The above works fine. But the call is case-sensitive with respect to procName, which is "MyMethod" in this case. Often it's hard to know up front whether a method is named, say, "SubString" or "Substring", and here it would matter. A lot. As a VB programmer, this case sensitivity is more than a little annoying. So I then attempted the following, which makes use of binding flags, in particular the 'System.Reflection.BindingFlags.IgnoreCase' binding flag:

    Dim methInfo As Reflection.MethodInfo = _
        objectReference.GetType.GetMethod( _
            procName, _
            System.Reflection.BindingFlags.IgnoreCase Or _
            System.Reflection.BindingFlags.Public Or _
            System.Reflection.BindingFlags.Instance, _
            Type.DefaultBinder, _
            paramTypes, _
            Nothing)

The above works perfectly too... However, it does *NOT* ignore case! That would seem to be the point of 'BindingFlags.IgnoreCase', right?? Now this might not be a brilliant thing for me to be doing, for I assume that a C# assembly could have two methods with the same name differing only by case. (Although they should not both be public in this situation, but theoretically they could be.) I really wonder how VB would deal with calling into a C# assembly in this situation with early-bound calls, never mind late-bound. But for the moment, I'm willing to assume that the call to the method is safe as a case-insensitive call. So any ideas on how I can get 'System.Reflection.BindingFlags.IgnoreCase' to work? Thanks all in advance... Mike
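For comparison, here is the same call written in C# -- my understanding of the documentation is that BindingFlags.IgnoreCase should make the name comparison case-insensitive with this overload, so this is what I'd expect to work (sketch only; 'objectReference', 'procName', and 'paramTypes' stand in for the real values):

    using System;
    using System.Reflection;

    class ReflectionSketch
    {
        static MethodInfo FindMethodIgnoreCase(object objectReference, string procName, Type[] paramTypes)
        {
            // IgnoreCase is supposed to make the name lookup case-insensitive;
            // passing null for the binder means the default binder is used.
            return objectReference.GetType().GetMethod(
                procName,
                BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance,
                null,
                paramTypes,
                null);
        }
    }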
-
I don't have a clue, myself, really. (Sorry.) Just thinking out loud, is there a BeforeRightClick() event or the like you can hook?
-
I had not viewed the original intent of this debate as being "technology-limited", but given the constraint that there is only one way, the issue would seem settled. :) Well, even if it does not use the original matter to reconstitute the person, I think we can still safely view it as the "same" person in this case. As I've explained before, we can make a far-from-perfect copy and still be forced to consider it the "same". We do so every day. So this quantum-level copy process would most definitely fit the bill. And the fact that it forces us to destroy the original in the process is a bonus, in that it eliminates any ethical dilemmas! Well, as I said before, I am willing to go as precise as required. If the blueprint must be measured to the sub-atomic / quantum level, then that's what has to be done. I have no issues with this at all. If, however, we can achieve the required level of "blueprint creation" (at whatever level of detail is required to successfully reconstitute the person later) without actually destroying the original, it is at this point that we have some real head-scratching to do in terms of deciding which is the "real" one. In my opinion, they would both have equal claim to being the "real Captain Kirk" in this scenario.
-
That's a really interesting discussion, PD. The follow-up, here, shows that they were basically forced to leave this conflict as is, at least for now.
-
Actually, I get your point, I do. By teleporting in the manner you described, I have no problem accepting your conclusion: it is the same matter in the same exact configuration, so I cannot see how we can possibly say anything but that it is the same object and the same "consciousness". But I do believe that we can loosen our standards and still call the object "the same". My point is that we do so every day. For example, I am currently down in Florida with my parents, whom I had not seen in 6 months or so. When we see each other we say things like "it's so good to see you" or "you look great" or whatever... The weird thing is, though, that there is almost no doubt that my "parents" have had nearly full molecular replacement over the 6 months since I saw them last, as have I. Certainly all the skin and eye epithelium are 100% replaced (probably weekly). The only molecules that would be the same, certainly the only visible ones, would be the lower portion of their hair and nails. That's it. That's all that is really the "same", at least at the molecular level. So I accept your argument in the sense that the entanglement level of teleportation does reproduce the same object. The same matter and the same blueprint could only be judged as the "same". However, I do believe that there are a couple of other issues here: (1) Theoretically, a scanner could read the human body at the quantum level (or at whatever level of detail would be required) and then reproduce the body to that level of specification using other matter. You have proposed that this is impossible, and that no mechanism other than entanglement has ever been demonstrated, but this is not the same as proving that any mechanism other than entanglement is impossible. If you prove it -- or if we agree to assume it -- then your statement stands uncontroverted. (2) Even accepting the "entanglement is the only way" proposition, there are still some interesting issues to consider with respect to "who we are" and what it means to be the "same". I think I've put together some interesting scenarios regarding "molecular replacement", but I guess that maybe I'm the only one who thinks so. Well, I tend to agree. I did hypothesize, above, that a defibrillator or the like might be necessary on the other side to jump-start the system. We might need to jump-start the brain too, or we could get the heart going but have the person stuck in a coma. Or maybe an atomic-level blueprint is indeed too crude. But I'm not stuck on atoms and molecules. Honest. I view the generalized procedure as reading the "blueprint" at whatever level of detail is required. This may well be at the sub-atomic or quantum level; that's fine. I am willing to relax the assumption, however, that the *same* matter be sent to the other side. Your entanglement teleportation approach may not permit this, but theoretically, I can envision a process where the body is read, creating the blueprint, and then the blueprint is transmitted to the other side, where a new person is created according to the blueprint, but using other matter. If entanglement or some other theory proves that such a scheme is impossible, well, then I guess the debate is over. I just don't have the background in physics to know one way or the other... But it certainly sounds possible in theory, no?
-
Excel/MyApp Threading Issue, pls help
Mike_R replied to q1w2e3r4t7's topic in Interoperation / Office Integration
This is a really interesting issue. I've never seen this before. On the other hand, I don't use Excel events much from .NET... Your idea to use a delegate to pierce from one thread to the other is exactly the right idea. Nice job. I don't really know why this is happening in the first place, however. My guesses are that either: (a) COM events in Excel are handled on a different thread?, or (b) it's possible that because you are using out-of-process automation, the event call-back is marshalled through a different thread. That is, I don't know whether this would be happening from within an in-process managed COM add-in. The other thing is that because you are using WithEvents variables, you will likely have trouble releasing your Excel Application instance. You should consider either: (a) using AddHandler and RemoveHandler to hook the events instead of 'WithEvents', or (b) if you use 'WithEvents' variables as you are now, then you should use a cleanup routine similar to the following:

    Sub CloseUpShop()
        GC.Collect()
        GC.WaitForPendingFinalizers()

        ReleaseAnyCOMObject(CObj(ws))

        wb.Close(SaveChanges:=False)
        ReleaseAnyCOMObject(CObj(wb))

        xlApp.Quit()
        ReleaseAnyCOMObject(CObj(xlApp))
    End Sub

    Sub ReleaseAnyCOMObject(ByRef o As Object)
        Dim tempVar As Object = o
        o = Nothing
        Marshal.FinalReleaseComObject(tempVar)
        tempVar = Nothing
    End Sub

I hope this helps... And let us know how it goes! Mike
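For reference, a rough C# equivalent of that cleanup pattern (sketch only; 'ws', 'wb', and 'xlApp' stand in for whatever Excel objects the real code holds):

    using System.Runtime.InteropServices;

    static class ExcelCleanupSketch
    {
        // C# equivalent of the ReleaseAnyCOMObject helper above:
        static void ReleaseAnyComObject(ref object o)
        {
            object tempVar = o;
            o = null;                                   // drop our reference first
            Marshal.FinalReleaseComObject(tempVar);     // then release the underlying RCW
        }

        // And the overall shape of CloseUpShop:
        //   GC.Collect();
        //   GC.WaitForPendingFinalizers();
        //   ReleaseAnyComObject(ref ws);
        //   wb.Close(false);    ReleaseAnyComObject(ref wb);
        //   xlApp.Quit();       ReleaseAnyComObject(ref xlApp);
    }
-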
You know, we can "teleport" from one place to the other all the time. Just get in a car, or walk across the room, even. The matter is transported from one place to the other with the molecules, atoms and sub-atomic particles all in the exact same places. It's only when we get to the "breaking apart" and then "re-assembly" part that this starts to get theoretically interesting. But what is interesting is that we can transport ourselves across distance with DIFFERENT matter and still be considered the same. Your quanta-level replication is not necessary. For example, if I walked for a week and covered 200 miles, I would have transported myself 200 miles. And the molecular replacement over that time would have been considerable as I drank and ate, replacing water and cells that died off. Give it a year and the molecular replacement would be 100% or very near 100%. But molecular replacement over even a two-week period is probably rather high. So your requirement for "sameness" is too strict. Or alternatively, your definition of sameness could be called "strict sameness" as compared to the sloppy "layman's sameness", which allows for full molecular replacement and yet, as laymen, we still consider ourselves to be the "same" from day to day or year to year. But if we loosen up our definition of "sameness" to be "layman's sameness", then things start to get interesting. Teleportation is no longer required to be exact to the quanta level. The same types of molecules in the right positions ("layman's sameness") is sufficient. So now we can talk about teleporting molecules, however crude you may find that, and it's enough. Then we can take the next step of not even sending the original's molecules (because even 100% molecular replacement is allowed under "layman's sameness") and produce the object on the other side. And since we no longer send the original matter along with the blueprint, this is now really more like a "replicator". That is, we don't even have to destroy the original if we don't want to. We can make clones of ourselves instantly in other locations. Under the rules of "layman's sameness", these beings are all the same and would all insist that they are the "real Captain Kirk". But which one would be right? The answer: they all would be. Even if the original tried to claim that only he was the "real Captain Kirk" due to containing not only the exact blueprint, but also the exact matter (and therefore being the only one with the exact quanta positioning), he could not maintain that claim a year later, maybe not even a few weeks later. They would all be clones. But by "layman's sameness" they would also all be the original.
-
Actually, your steps are quite specific to a given implementation. That there "really are only two steps" could be written more generically as: (1) Observe the state of all *parts* at the granularity required, be it molecules, atoms, and/or sub-atomic particles. (2) Transmit that blueprint and reconstruct the original object from the blueprint using like *parts*, be they molecules, atoms and/or sub-atomic particles. Actually destroying the original object is optional, as is actually transmitting any matter, either as matter or transformed into energy. Ok, well this is interesting... I didn't know that teleportation of any sort had been performed. Are you sure? In any case this is new information -- and very cool at that -- but it does not really change the theoretical debate, does it? The original theoretical question you proposed was the following: In short, your brother is right. As I explained earlier, we "teleport in place" all the time. The illusion of self-consistency is just that: an illusion. This illusion is maintained by the fact that our memories are "teleported in place" as well, and the reproduction mechanism is so exact, and the change so gradual, that we ourselves and all others around us do not perceive the change and recognize us as the same. But we are not. We are not the same matter, and we are only "very close" to the same blueprint from one point in time to the next. So teleporting with the same exact matter is not necessary. And once you make that step, you start to realize that you do not have to destroy the original at all. So you can "teleport" a clone, while killing the original is totally optional and most definitely would be murder. If you created a "clone" without killing the original in this manner, they would both feel like they are the original, and they would both be equally correct or equally wrong, depending on your definition of "same". (If your definition is "same blueprint", then they are both correct; if the definition is "same matter", then they are both wrong; or wait a week or so and one will be 100% wrong and the other 98%+ wrong at that point...)
-
Well, conceptually we're talking about the same thing. If the atomic level is too crude and the blueprint has to be at the "quantum level", then that's fine. What I meant by electrical impulses was the position of electrical charges and therefore electron positions, so I was going beyond simple atomic construction. But no matter, one needs sufficient detail for the blueprint, whatever that level of detail might be. The next issue is transporting the actual matter of the original person (or not). The options I was suggesting were: (1) Break the original apart down to atoms and transport them, then re-assemble on the other side according to the blueprint. (2) Convert the original matter to energy and beam it over, re-convert it to matter, and then re-assemble according to the blueprint. (3) Don't transport the matter at all in any form, and simply re-assemble the object on the other side using *other* matter that is identical down to the quantum level, as required. (All this ignores issues with the Heisenberg uncertainty principle, of course, which we can abstract away for the purpose of this theoretical exercise, but which in reality could be a problem depending on how precisely the sub-atomic particle placements and energy potentials must be measured and re-created.) That "transforming the matter to energy and back is impractical" is of no importance, as all of this is impossible and theoretical to begin with. The real point I was trying to make is that, effectively, choice #1 and choice #3 above really are identical if you think about it, and the way to get one's head around that fact is to realize that choice #2 is a middle ground between the two.
-
Some cells may not die, but the atoms and molecules will still rotate out. Water in particular is going to cycle rapidly. My bet is that the calcium in the bones is the most static, possibly permanent. But if certain nerve or brain cells live longer, you could be right; I really don't know.