Illusory Follies: Andrew Flanagan's Blog

30 Mar 2009

C++ from Python

I was impressed today to see how easy it was to call a C++ DLL from Python. I got the following information from another site:

1. Create a file called dlltest.cpp and write a function that sums two numbers and returns the result:

      //dlltest.cpp
      #define DLLEXPORT extern "C" __declspec(dllexport)
 
      DLLEXPORT int sum(int a, int b) {
          return a + b;
      }

The extern "C" construct tells the compiler to use C linkage for the function. It also removes the C++ name decorations from the function names in the DLL.
__declspec(dllexport) adds the export directive to the object file, so you do not need to use a .def file.
2. Include the header of the function in dlltest.h:

      //dlltest.h
      extern "C" int sum(int, int);

3. Create a new Dynamic-Link Library project, include the two files, compile, and create the DLL.
4. You can now use Dependency Walker to see the list of exported functions. You should see the sum function there.
5. Move the DLL into the Python folder, or use

      >>> import sys
      >>> sys.path.append(r"C:\path\of\dll")

to include the DLL folder in the list of Python folders.

6. Use the ctypes module to access the DLL:

      >>> from ctypes import *
      >>> mydll = cdll.dlltest
      >>> mydll

Note: the ctypes module is included with Python 2.5 and later. If you are using an older version, you can download ctypes here.
7. Now call the function:

      >>> sum = mydll.sum
      >>> sum
      <_FuncPtr object at 0x0097DBE8>
      >>> sum(5, 3)
      8
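One addition worth making to the recipe above: declare the function's signature so that ctypes checks arguments and converts return values instead of guessing. The sketch below exercises the same pattern against the C runtime (loaded via `CDLL(None)`, which works on POSIX systems; on Windows you would load the `dlltest` DLL itself as shown in the post):

```python
import ctypes

# On POSIX, CDLL(None) exposes symbols from the already-loaded C library.
# For the post's example you would instead use ctypes.CDLL("dlltest.dll").
libc = ctypes.CDLL(None)

# Declare the signature explicitly so ctypes validates the arguments and
# converts the return value correctly (essential for non-int types).
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-5))  # 5
```

Without `argtypes`/`restype`, ctypes assumes everything is a C int, which happens to work for `sum(int, int)` but silently breaks for floats, pointers, and strings.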

Reposted from here... (Thanks!)

I need to get into Python more -- I've used Ruby a bit but have tended to ignore Python, simply because I've never seen a need for it. Evidently, I need more side projects.

19 Mar 2009

No Way…

I think I actually got a snippet of Symbian code to work on the first attempt! This is a first... Maybe I'm actually getting the hang of this. I just find the whole "descriptor" concept very odd.

Anyway, all I was trying to do was replace all plus signs with spaces. I normally wrestle with descriptor nonsense for a while but this time, I got it on the first try!

_LIT(TestData, "THIS+IS+A+TEST");
HBufC* heapBuf = HBufC::NewLC(255);
*heapBuf = TestData;
TPtr pHeapBuf(heapBuf->Des());
// Find() returns KErrNotFound (-1) when no "+" remains; comparing > 0
// would miss a "+" at position 0, so test against KErrNotFound instead.
while (heapBuf->Find(_L("+")) != KErrNotFound)
{
    pHeapBuf.Replace(heapBuf->Find(_L("+")), 1, _L(" "));
}

CleanupStack::PopAndDestroy(heapBuf); // Don't forget!
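For contrast, here is the same find-and-replace loop sketched in plain Python; `str.find` plays the role of `TDesC::Find`, returning -1 where Symbian returns KErrNotFound:

```python
# The Symbian loop above, in plain Python: repeatedly find a "+" and
# replace the single character at that position with a space.
text = "THIS+IS+A+TEST"
pos = text.find("+")
while pos >= 0:  # str.find() returns -1 when no "+" remains
    text = text[:pos] + " " + text[pos + 1:]
    pos = text.find("+")
print(text)  # THIS IS A TEST
```

(In real Python you would just write `text.replace("+", " ")`, of course; the loop is only there to mirror the descriptor calls.)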

Bleh... stupid Symbian. Thank goodness I didn't have to change the length of the descriptor...

4 Mar 2009

Updates

So I've been sloppy again and not updating the site.

Without further ado:

1) Safari Online was a bit of a disappointment. I like the selection, the price is reasonable, the searchable format is wonderful, and the ability to cut and paste example code is stellar. So why disappointing? I don't use it. My reading is usually in the evening; I want to be able to sit back in the easy chair and read. My laptop is fairly comfortable, but staring at a bright back-lit screen is most certainly not. It's just so much more comfortable to pick up a good old tree-based book and read that. Some of the advantages are still there: if I find something in my book, I can easily cut and paste it from Safari Online, but now I'm basically just using Safari as a quick digital copy of all the books that I already have. Bleh... not worth it. What would make it worth it? If the Kindle gods worked with O'Reilly to make the entire Safari Online site browsable on your Kindle, I would buy it. I would pay extra. I would make a weekly pilgrimage to Amazon headquarters. It would be great. But they don't. Furthermore, from what I've heard, on the software side the Kindle doesn't handle tables, mono-spaced fonts, and some other things that are almost required for reading a technical programming/development book.

2) Work has been busy. C/C++ has been pretty minimal... I'm getting much more comfortable with memory-management issues and have been pleasantly surprised to see that most C++ code I dig up out there basically looks like mine. I'm still definitely not an expert at decrypting some of the deep-magic C++ code I've seen, but then again, I bet the authors of most of that stuff don't understand it anymore either. C# has been a mixed bag. I've really enjoyed getting into the "new" features of 3.0 and 3.5, which I had been neglecting until recently. A lot of time spent on Stack Overflow has helped get me up to speed with LINQ and some of the other fun new language features. Generators, extension methods, anonymous functions... it's all sorts of fun.

3) I've been able to watch as the value of various investments that I can't easily cash out of has continued to dwindle. Thankfully, much of what I did have invested in long-term investments I was able to move to much less volatile funds but it's still been rough. On the bright side, the end of the world may be near as the Mayan calendar has it set to 2012. Obama would have the rare privilege of being the final President and (also on the bright side) wouldn't have to worry about his legacy as no one would care how big the national budget is at that point. Also, this would save me a lot of frustration with the whole Social Security thing. One can only hope...

4) My wife has been busy with her business. She's continued to embroider like crazy. I've been trying to push her to do more, since she's only pregnant with 3 boys under 5 at home. 🙂 She tellsls me that some day she may expand her business, but not now. I think she's in a good situation. On NPR (motto: Unbiased news since 1970, or whenever it was we started getting funded by liberals!) there was an interview with a business owner in the same general "baby products" market. Her remark was that the "economic crisis" we're experiencing will likely drive a baby boom as people's lives and schedules slow down and more time is spent at home. But hopefully the economy picks up soon so they can afford overpriced baby products for their new brood. I got her a new iPhone so that she can become more of a geek. She really isn't nearly geeky enough, and it bothers me. I was interested to see that even the Taliban are getting in on the iPhone action (see picture).

Time precludes further updates.

...Will write more later...

21 Jan 2009

Variable Naming

Some in computer programming have insisted on using the prefix is for all boolean data types. I've been bumping against this lately, and I think it's silly. It's a form of Hungarian notation, which seems unnecessary considering that the compiler or interpreter will, in almost all cases, help us deal with type issues. For readability's sake, wouldn't it make sense to name something what it represents? For example, if a boolean variable represents the state of being done, I suppose isDone may be an OK name. But if it represents the state of something that may or may not have been done 3 years ago, a better name might be wasDone. What if we want special checking to take place when a flag is set? Should that flag be called isCheck? It seems silly -- maybe shouldCheck would work. What if we're talking about ownership or class relationships? isChild works as a name to indicate a class relation, but hasChildren is a perfectly logical name for the inverse relationship. I saw a few places that advocate prefixing these names with helping verbs (have, has, had, do, does, did, shall, should, will, would, may, might, can, could, and must) or verbs of being (am, is, are, was, were, be, being, and been). This makes sense. However, we speak English, and sometimes in English we drop helping verbs (think of did, for example). Is something.didSucceed better or worse than something.succeeded? There are numerous similar examples.

I think in the end, naming variables to make them readable is much more important than following some convention. Or perhaps I should rephrase that: the convention we ought to follow is the convention of written English, not some tightly defined arbitrary subset.
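To make the naming ideas concrete, a contrived sketch (all the names below are mine, purely illustrative):

```python
# Purely illustrative names -- none of these come from a real codebase.
class Order:
    def __init__(self):
        self.shipped = False       # plain past tense reads naturally: order.shipped
        self.was_archived = False  # helping verb marks a past state
        self.should_audit = True   # helping verb marks a flag that drives behavior
        self.items = []

    @property
    def has_items(self):           # "has" for an ownership relationship
        return len(self.items) > 0

order = Order()
print(order.should_audit and not order.shipped)  # True
```

Note that `order.shipped` drops the helping verb entirely and still reads like English, which is exactly the point above.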

17 Nov 2008

Cell Phones

Well, my new position has been keeping me busy writing software for Symbian, Nokia's primary phone operating system. I hate it. It's fun to learn new stuff, and it's probably a good thing to be learning, but it's still awful. The documentation is terrible. The developer forums offer lousy support. Two unique programming elements, descriptors and the cleanup stack, just make life agonizing.

I've never appreciated C# so much.

On the bright side, it's drawn me back into C/C++ coding, which I haven't done in years. That part is fun. It's funny how many things I take for granted with C#. I've even gotten lazier considering some of the wonderful upgrades in C# 3.0. For example:

List<string> list = new List<string> { "Susie", "Lucy", "Bobbie" };

This makes sense to me. It's easy and straightforward. It saves [development] time.

At work things have been interesting because I've been working with developers firmly set in an embedded mindset. They think in terms of saving bytes. My .NET programs take up 20MB of RAM just in basically displaying a simple window with a few controls. It bugs them.

I don't know -- I see the point of using assembly, C, or even the horrible descriptors of Symbian in situations where you are highly concerned with efficiency. However, you're going to be forced to spend far more development time, and the code will almost by necessity be much more difficult to maintain. When there's no clear single way to convert a descriptor to a char *, every developer will do it differently, and the code becomes more and more complex and incomprehensible. It might run fast, but it's not flexible.

In the world of mobile development, optimization for speed seems important but if you take 6 months to update your application when new feature sets become available, your product likely isn't selling.

Along these lines, I'm considering pouring a bit of time into iPhone development. I've always shuddered at Objective-C, but I need to bite the bullet and get into it. My assumption is that I'll be happy with it, since from what I've heard it balances maintainable, understandable, and easy-to-write code with reasonable performance and stability.

I'll keep you posted.

6 Apr 2008

Design & Functionality

I have always been a stickler for functionality in my programming. What I seek to do is develop solutions through code that model existing efficient functionality or create entirely new abstract models that can be understood clearly and manipulated easily to achieve functionality.

I'm not a "GUI guy" and I have a hard time when I move from developing an easy-to-interface class library to an easy-to-use user interface. Basically, I seem to have no trouble with the idea of adequately describing objects (even abstract objects) and developing easy interfaces. However, displaying this information to a user is harder.

I'm a big fan of simple interfaces. I like my new virtual-server provider and my new domain registrar because both sites are simple. They have well-defined functions and present their information in easy-to-understand lists. You don't have to grasp some complex object model or understand the difference between clicking on "My Account" and "My Hosting" or some bizarre thing like that. It just makes sense.

Now, at the same time, both these sites (and I love them dearly) are rather ugly. I myself don't mind this at all. They're functional and they feel right -- like a solid metal tool in my hand, it doesn't look pretty but I thoroughly enjoy using it.

The intersection of functional code and beauty is, to me, pure happiness. However, beauty is, or at least is often regarded as being, in the eye of the beholder. And on the Internet, there are a lot of beholders.

I'm reminded of the site CSS Zen Garden; it allows you to view the same material using many different style sheets. There are some beautiful graphics and layouts, but the actual content never changes (you're simply switching stylesheets). I like this a lot. Beautiful websites are great, but beautiful websites where the presentation is perfectly separable from the content are wonderful. I know this isn't really "functionality," but it allows the opportunity for such. With this concept, you can develop extremely functional content and then alter the stylesheet to present that functionality in a myriad of ways.

I actually did something like this (but very simply) for our family website. I have a stylesheet for each month, and every month visitors are presented with a different stylesheet by default. It helps keep the site from feeling old and boring to me and to others. The functionality is always the same (very plain-vanilla WordPress functionality).
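The month-to-stylesheet mapping could be as simple as the sketch below (the `style-MM.css` naming scheme is my invention for illustration; the real WordPress setup surely differs):

```python
import datetime

def monthly_stylesheet(today=None):
    """Pick a stylesheet filename based on the current month.

    The style-MM.css naming scheme is assumed for illustration only.
    """
    today = today or datetime.date.today()
    return "style-{:02d}.css".format(today.month)

print(monthly_stylesheet(datetime.date(2008, 4, 6)))  # style-04.css
```

Twelve stylesheets, zero content changes: the rotation lives entirely in the presentation layer.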

There's a lot of this sort of thing happening on the Internet, and there are plenty of good, clean websites with solid, well-thought-out designs that provide excellent functionality. But I wouldn't say it's the norm.

My new iPhone to me is an example of combining functionality with beauty. It's not quite as functional as I would like but it's much better than what I've had in the past. The interface and presentation of material though is absolutely wonderful (like much of what Apple makes).

I guess my rambling point is that popular success seems to lie at the intersection of functionality and beauty. Allow users to aid in defining beauty (through open and customizable interfaces) and you've added even more value.

I'm revving up to produce some new web applications (and possibly an iPhone app if I can find the time). I think my biggest issue is that although I feel confident making functional applications and making them have customizable interfaces, I'm pretty lousy at developing anything more than the most simplistic presentation. I've picked up some books on design so maybe I'll actually get better at it. We'll see... more to follow as I pursue this.

24 Mar 2008

Easter Fun!

So -- on what date does Easter fall? Now YOU can impress your friends by using either of the Perl scripts below. Thanks to this site, which supplied both Butcher's and Oudin's algorithms:

Butcher's Algorithm in Perl (1876 -- and the Perl code is almost that old too! 🙂 )

sub GetEasterDate {
    my ($year) = @_;
    my $a = $year % 19;
    my $b = int($year / 100);
    my $c = $year % 100;
    my $d = int($b / 4);
    my $e = $b % 4;
    my $f = int(($b + 8) / 25);
    my $g = int(($b - $f + 1) / 3);
    my $h = (19 * $a + $b - $d - $g + 15) % 30;
    my $i = int($c / 4);
    my $k = $c % 4;
    my $l = (32 + 2 * $e + 2 * $i - $h - $k) % 7;
    my $m = int(($a + 11 * $h + 22 * $l) / 451);
    my $month = int(($h + $l - 7 * $m + 114) / 31);
    my $p = ($h + $l - 7 * $m + 114) % 31;
    my $day = $p + 1;
    return ($month . "/" . $day . "/" . $year . "\n");
}

Oudin's Method in Perl (1940)

sub GetEasterDate {
    my ($year) = @_;
    my $century = int($year / 100);
    my $G = $year % 19;
    my $K = int(($century - 17) / 25);
    my $I = ($century - int($century / 4) - int(($century - $K) / 3) + 19 * $G + 15) % 30;
    $I = $I - (int($I / 28)) * (1 - int($I / 28) * int(29 / ($I + 1)) * int((21 - $G) / 11));
    my $J = ($year + int($year / 4) + $I + 2 - $century + int($century / 4)) % 7;
    my $L = $I - $J;
    my $month = 3 + int(($L + 40) / 44);
    my $day = $L + 28 - 31 * int($month / 4);
    return ($month . "/" . $day . "/" . $year . "\n");
}

The second algorithm is more efficient: running 100,000 years of calculations, I get about 0.035 ms better performance than with the "older" method.
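For the curious, here is my own rough Python port of Oudin's method (a direct translation of the Perl above, so treat it as a sketch rather than a reference implementation):

```python
def easter_date(year):
    """Oudin's method (1940), translated from the Perl version above."""
    century = year // 100
    g = year % 19
    k = (century - 17) // 25
    i = (century - century // 4 - (century - k) // 3 + 19 * g + 15) % 30
    # Correction step: pulls i back for the two calendar edge cases.
    i = i - (i // 28) * (1 - (i // 28) * (29 // (i + 1)) * ((21 - g) // 11))
    j = (year + year // 4 + i + 2 - century + century // 4) % 7
    l = i - j
    month = 3 + (l + 40) // 44
    day = l + 28 - 31 * (month // 4)
    return "{}/{}/{}".format(month, day, year)

print(easter_date(2009))  # 4/12/2009
print(easter_date(2024))  # 3/31/2024
```

Note that Python's `//` floors toward negative infinity while Perl's `int()` truncates toward zero; for the intermediate values this algorithm produces (all the divided quantities are non-negative), the two agree.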

For those of us who just "want the facts", here you go:

  • 4/12/2009
  • 4/4/2010
  • 4/24/2011
  • 4/8/2012
  • 3/31/2013
  • 4/20/2014
  • 4/5/2015
  • 3/27/2016
  • 4/16/2017
  • 4/1/2018
  • 4/21/2019
  • 4/12/2020
  • 4/4/2021
  • 4/17/2022
  • 4/9/2023
  • 3/31/2024
  • 4/20/2025
  • 4/5/2026
  • 3/28/2027
  • 4/16/2028
  • 4/1/2029
  • 4/21/2030
  • 4/13/2031
  • 3/28/2032
  • 4/17/2033
  • 4/9/2034
  • 3/25/2035
  • 4/13/2036
  • 4/5/2037
  • 4/25/2038
  • 4/10/2039
  • 4/1/2040
  • 4/21/2041
  • 4/6/2042
  • 3/29/2043
  • 4/17/2044
  • 4/9/2045
  • 3/25/2046
  • 4/14/2047
  • 4/5/2048
  • 4/18/2049

Oh, and if you're looking for a pattern, there is one: the full sequence of Easter dates repeats every 5,700,000 years.

Thanks to yesterday's Slashdot post for getting me interested in this.

22 Jan 2008

Computer Science vs. Programming

There have been a couple of Slashdot submissions here and here. They concern an article published by two professors from NYU asserting that Java (and similar high-level languages) is damaging to teach as the "first language" of a Computer Science education. Since I wasn't a real CS major, I'm perhaps a little outside of this discussion. However, I cut my teeth on C/C++ at school before moving on to high-level languages (really just .NET and some very high-level languages like Ruby and Python).

I tend to agree with the conclusions. It's not that there's no place for Java. It's just that without the fundamentals of pointers, memory management, and a basic understanding of the construction of complex data structures (all of which are simply handed to you with Java or .NET), it's very difficult to fully comprehend what you're doing.

I had a very good professor who taught algorithms and data structures at school, and although the experience was painful at the time, I'm sure it has helped immensely. Despite my affection for things like Ruby on Rails, which is extremely high level, I'm annoyed sometimes by the indeterminacy of functions and the vagueness of the specifications. When you write in a language that can do powerful things in one line of code, you're taking a lot of shortcuts, and it can be surprising when a function returns something very unlike what you expected due to the complexity of the underlying code. Basically, you ignore things like sorting algorithms entirely in favor of the "built-in" sort routine. How does it work? Well, you can dig it up in the code, but most people will simply use it and assume that it's the fastest for all of their needs. What happens is that writing code becomes an assembling of pre-built components. It reminds me of "building" Ikea furniture. Granted, it takes a certain amount of handiness to put together your new desk, but you're not gaining skills you could use to build anything yourself without first being handed the pre-built pieces.

I tend to think of myself as primarily a Software Engineer. I'm not just a programmer because I do a lot more than write code. But I'm also not much of a Computer Scientist because I spend very little time actually attempting to improve upon techniques and mechanics of processing information. These definitions are a little vague, but I feel that Software Engineering is more what I do because I apply creativity to the process. I think one can be a Computer Scientist and a Software Engineer but I don't think my work normally falls into both categories. I've always found the role of a traditional Architect to be similar to Software Engineering. It's an application of creativity (design, color, texture, material, etc.) to a field of science (physics) that results in [hopefully] useful buildings. There are some "cutting edge" Architects that attempt new and innovative projects but most Architects are working with existing ideas and applying them creatively.

I've heard that Frank Lloyd Wright's buildings although amazing in appearance and remarkable in their artistic qualities are often problematic in simple ways. Flooding basements, leaking roofs, etc. were the result of a poor implementation of a great and artistic idea. It's not enough to be artistic and creative; a good system like a good building works and functions as it should in addition to its aesthetic qualities (which make it unique).

I've always seen this distinction between implementation-focused approaches and theory-focused approaches. Implementation is desirable for the production of new applications and systems but will always be held back by advances in theory. It seems that Computer Science has largely lost its way in North American schools by focusing too much on implementation without teaching theory. Programmers are cheap. It doesn't take a lot of brains to assemble code from pre-built components, and creativity is often the only difference between a good programmer and a mediocre or poor one. Without new advances in theory, applications and systems will simply have to stand on the desirability of their implementation (i.e., how easy is it to use?). New ideas must be infused into the process for real advances to be made.

The use of so-called AJAX seems an interesting example. The ability to use things like the XMLHttpRequest object was available for quite some time before companies like Google began using it to do amazing things. This is entirely focused on implementation. Web 2.0 applications (whose primary distinction seems to be AJAX technology) are an innovation in implementation only. Many "hard-core" programmers find the terribly sloppy and inefficient results less than satisfying. It does cool things, but isn't there a Better Way? I use AJAX quite a bit these days, and it's handy. However, I have only a bare understanding of how it works and what might be a better design. I don't tend to concern myself with the next evolution of the Internet -- I focus on building things that work with the technologies that exist now. But AJAX really isn't a huge advance -- in fact, its "magic" often results in massive security holes, odd and unpredictable behavior, and hugely increased server overhead.

At the same time, a Software Engineer who truly understands the science of the code that he writes is likely to make far fewer mistakes and write much more efficient code. Even without much creativity, a programmer who can optimize code is a desirable catch for any software company. I think everyone should understand the underlying details of code, even if some end up focusing on the creative, implementation-focused approach or the theoretical, algorithmic approach.

I mentioned security in regard to AJAX, and this seems important. It's well and good to provide applications that do the same things in easier ways, but without a strong cadre of Computer Scientists developing faster, more secure, and more reliable ways of doing business, we end up with applications that are never properly tested (it's difficult to test code that just does magical things!) and never adequately secured.

A bit of a rambling post -- hopefully I've managed to convey something. Your comments are welcome.

11 Jan 2008

Happiness is Ruby on Rails

Ruby on Rails is such a beautiful thing... Here's some view (presentation) code that I slapped together in about 2 minutes:

  <% Category.find(:all).each do |category| %>
    <div id="<%= category.cssid %>">
      <h2><%= category.Name %></h2>
      <ul>
        <% category.items.find(:all).each do |item| %>
          <li><%= item.ShortDescription %></li>
        <% end %>
      </ul>
    </div>
  <% end %>

Basically, it lets me easily list out categories, and then items within those categories, on a webpage (with per-category styling supported). It's interleaved with HTML and produces very slick output. It just feels so natural, and that's what I love about it.