Jaroslav Tulach discusses the relationship between MVC and the NetBeans Platform and explains why the DCI approach offers a better understanding.
Sang Shin’s online Java courses are very informative and are well paced. I highly recommend attending the free online courses even if you think you know all there is to know about Java.
The next session, the 16th, will start on Sep. 1st, 2009. Starting with this session, this course covers basic topics while the “Java EE Programming Advanced (with Passion!)” course covers more advanced topics. Just like the other online courses I teach, this course is offered online only. For those of you who are not sure what it’s like to take this course online, please see What it’s like to take Sang Shin’s online courses. Just to set expectations correctly, there is no real-time webcasting.
Did I mention the course is FREE? If you ever wanted to learn Enterprise Java, now is the time to sign up!
The following is MY perception of Sybase’s PowerBuilder:
Years ago PowerBuilder was king. No one could touch it. It was relatively inexpensive. Then Microsoft’s Visual Basic matured and Borland’s Pascal-based Delphi was released. PowerBuilder fell, and fall it did.
As it was falling from the throne, Sybase purchased Powersoft, makers of PowerBuilder. As the market share continued to shrink, PowerBuilder developers had more difficulty finding new projects. Most new development was written in Visual Basic or Java.
Years went by with marketing of PowerBuilder amounting to little more than the occasional road show, TechWave presentations, and ISUG Technical Journal ads catering to existing customers. Little to no effort was put forth by Sybase to gain new PowerBuilder customers.
During this week’s Sybase TechWave, PowerBuilder version 12 was released. It has all the whistles and kitchen sinks you could ask for. An amazing tool for development! Too bad no one outside of the die-hard PowerBuilder programmers will use it.
Sybase owns PowerBuilder. It owns the PowerBuilder software, the PowerBuilder language (PowerScript), the PowerBuilder VM, and everything PowerBuilder.
No problem right?
What will happen to PowerBuilder when Sybase is bought out by another company? Products with tiny market share like PowerBuilder would likely be killed or left in a state of limbo for several years. Anyone remember what happened when IBM bought Informix?
Do you really want to bet your career and business on a software development tool that is locked to a single smallish vendor?
Maybe, perhaps, if Sybase were to release the PowerBuilder 4GL language and PowerScript to the world like Microsoft did with the C# and Visual Basic languages and Sun did with Java… Perhaps if Sybase would allow 3rd parties to develop tools based on the PowerBuilder language royalty free…
Sybase: PLEASE FREE THE POWERBUILDER 4GL LANGUAGE!
I mean, really, what benefit does Sybase get from crippling the PowerBuilder developers?
Two of my best friends, Ryan & Anna Lubke, just went completely green. No, not like the Incredible Hulk, but as in going off of the electric grid in California. Due to silly laws in Chicago, my family can’t do much to go green.
Thursday Jan 01, 2009
I’m not one to normally post much of anything personal on the web, however, since several co-workers found my family’s new situation interesting, I thought I might share.
This year my family and I are going green(er). Specifically, we’ve moved into a house that is off the grid, meaning the house doesn’t use the typical public utilities (i.e. city water, electricity, etc.).
When T V S Murty asked on the sybase-l mailing list about Sybase ASE, multicores and Sybase licensing, the discussion quickly drilled down to whether or not multicores were beneficial to Sybase ASE and database software in general. Jeff Tallman, of Sybase fame, described in detail how Sybase ASE and multicore processors relate to each other.
From: Jeff Tallman
Subject: [sybase-l] RE: Multicore processors and ASE
As always, a lot depends on the application profile. For any multicore processor, there are a few factors to consider:
- The number of FPU units per chip (FPU = Floating Point Unit)
- The number and capacity (in IOPS) of IO processors per chip
- The type of chip multi-threading
With respect to #1, most DBMSs (at least the commercial ones) use statistics for query optimization, so while the actual query processing doesn’t use many FPU instructions (assuming a minimum of float datatypes, etc.), each query requires a pretty good smack of FPU time to do the floating-point math on the stats. The impact of this can be lessened by statement caching, fully prepared statements, or other means of reducing the optimizer load.
The second problem is one of capacity vs. bandwidth. All network and disk IO obviously needs to go through the IO processor. With 4 dual-core chips, you usually have 4 IO processors.
With a single chip with 8 cores, it is likely that you will have only a single IO processor. The single IO processor has 8 cores all making requests. The number of IO operations per second it can handle becomes a real key factor in the box’s scalability.
The chip multi-threading is an interesting issue as there are ~3 different flavors today:
- Intel’s Hyper-Threading (no longer implemented on Xeon, and I don’t think implemented at all anymore)
- Sun’s Chip Multi-Threading (CMT)
- IBM’s SMT
Some instructions require multiple cycles to complete because they are waiting on a fetch from main memory or similar. The thread/process of execution typically blocks in these cases, leaving a fairly idle core. By making use of this idle time, CMT or SMT can increase overall throughput (ignoring HT, which was fairly ineffective at this and appears to have been dropped by Intel lately).
The question that comes up is how you manage the threading. Do you use a form of timeslicing (i.e. when you suspend one process that is blocked on a call, do you let its replacement run for a certain length of time, or until it blocks, before returning to the original), or an interrupt-based/preemptive mechanism in which, when the blocked call returns, you suspend the other thread? Both have advantages and disadvantages, and both allow more engines than cores.
However, it may also mean tuning ASE to be more reactive, such as reducing the ‘runnable process search count’. You also need to be careful that engines running on CMT hardware don’t get woken back up on another core (especially if the L2 cache is split between the cores), among other considerations.
A rule of thumb: on a multicore CPU that supports chip threading, if you have a lengthy list of SPIDs in a ‘runnable’ state, enabling extra engines on the hardware threads will likely help. If you don’t (i.e. you are IO bound), it probably won’t.
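As a concrete sketch of that rule of thumb, the check and the two knobs mentioned above map to standard ASE system procedures; the values shown here are purely illustrative, not recommendations, and would need tuning against your own workload:

```sql
-- Look for a backlog of SPIDs in a 'runnable' state; a long list
-- suggests the engines are CPU-bound and extra engines may help.
sp_who
go

-- Illustrative value only: make engines yield sooner instead of spinning
-- while looking for runnable tasks, so a sibling hardware thread can use the core.
sp_configure 'runnable process search count', 3
go

-- Bring more engines online to take advantage of the extra hardware threads.
sp_configure 'max online engines', 8
go
```

Conversely, if sp_who shows mostly tasks sleeping on IO, adding engines is unlikely to help and the tuning effort is better spent on the IO path.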
Currently, Sun uses a timeslicing mechanism that is more along the lines of ASE’s SPID management, and as a consequence it scales well when the various tasks make a lot of blocking calls such as fetches from main memory. It does have the detrimental effect of only providing a fraction of CPU time to each ASE engine (e.g. 25% with 4 threads per core). The more parallelism your application uses, such as higher numbers of concurrent users in ASE, the more work can be distributed across the engines.
You have to be careful with network engine affinity and short queries (i.e. DML). They can have a negative impact, which may be controllable using engine groups. Overall, a CPU-intensive/CPU-bound application can benefit from the Sun CMT implementation; an IO-bound application does not.