Dr. Watson explains how:
Back in December of 2007, Andrew E. Bruno wrote an excellent article on how to use the open source program Dia to design databases. He used the resultant design to create a MySQL database but the same principles could be used to create just about any relational database from the design. Dia is available for Linux, *nix, and Windows.
In this post I’m going to give a quick how-to on creating database schemas with a wonderful tool called Dia. I’ve often found having a nice visual representation of a database to be quite helpful, but I can’t stand keeping it up to date. As soon as you add a new column or change the design around, you end up having to sync your visual diagram with your SQL files. It’s tedious having to manage the various SQL files for building the database, and it becomes an even larger pain when you try to support different database platforms, each with its own SQL syntax. So before you create your next database, read on and see how Dia can make your life a bit easier.
Dia is a program for creating diagrams and for this exercise we’ll be creating UML diagrams from within Dia. We’re also going to use a perl script called tedia2sql which will transform our Dia files directly to SQL for our target database. What’s also nice about creating database schemas this way is that you can generate SQL for multiple target databases without the maintenance overhead.
Once or twice a year I get to work with an excellent DBA pal from Illinois called Jason Froebe who describes himself as a, “Perlmonger capable of speaking fluent munchkin.” His personal blog is called Ramblings of a Geek, but I keep telling him he should rename it “Froebe’s Frontal Lobe”.
The work that Adrian mentions is a last-minute article for the International Sybase User’s Group that I wrote and he edited. The benchmarking article that many people are waiting for is being held up by several groups giving their approval for the release of the material.
As always, it is a real pleasure to work with Adrian 🙂
The latest craze among software vendors is to charge for each and every core a machine has, regardless of whether or not you’re going to use it.
Get this: if you want to buy a production license for your database/middleware/web server, the vendor (starts with an “S”) wants you to send them the hardware specs of the box. If you tell them it is a Dell superduper server with 8 quad-core CPUs and 96 GB of RAM, but you will only be using a single core for the database and devoting the rest to the middleware/web server, you STILL have to pay the vendor for all 32 cores (8 CPUs x 4 cores). Your software license cost is now 32 times more than what you should have to pay, IMHO.
Lots of software companies are now engaging in this anti-customer practice just to beef up their short-term revenues.
What makes them think that you won’t go to another vendor?
- Their competitors are probably doing the same sales tactic
- They have you by the family jewels, vendor lock in, and it will cost you far too much $$$ to migrate
- They think that you’re too stupid and/or timid to call their bluff
- FUD that is spread by well meaning and well known folk that don’t know jack about the open source alternatives
Who the hell do they think they are?
MyDatabases Volume 1 Issue 1 (July 2008)
Geek Spotlight: Michael Peppler
Sybase ASE on Ubuntu 8.04
Logical Process Manager
I know you can simulate it by creating a temp table with an identity column, deleting old rows, getting the max, etc., but still… PostgreSQL’s create sequence would be very handy… btw, Oracle has it too
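The identity-column workaround reads roughly like this (an untested sketch; the table name is made up for illustration):

```sql
-- Untested sketch of the workaround described above; seq_emulator is a
-- made-up name.  Each insert burns a fresh identity value, which stands
-- in for what PostgreSQL's nextval() would hand back.
create table seq_emulator (id numeric(10,0) identity, dummy char(1) null)
go
declare @next numeric(10,0)
insert seq_emulator (dummy) values ('x')
select @next = @@identity              -- the new "sequence" value
delete seq_emulator where id < @next   -- keep the table from growing
select @next
go
```

It works, but every caller pays for an insert and a delete just to get a number – which is exactly why a native create sequence would be so handy.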
just my opinion… grrrr
I’ve created ISUG enhancement request # 3449 for this.
create function ok (@boolean bit)
returns varchar(2000) as
declare @output_string varchar(2000)
select @output_string = case when @boolean = 1 then "" else "not " end
set @output_string = @output_string + "ok"
return @output_string
end
Msg 156, Level 15, State 2
Server ‘DBADEV1’, Procedure ‘ok’, Line 7
Incorrect syntax near the keyword ‘end’.
I’m trying to write an ASE implementation of TAP. A rudimentary TAP implementation for PostgreSQL is at http://www.justatheory.com/computers/databases/postgresql/introducing_pgtap.html
I’ve been able to use case in SQL UDFs before – see http://froebe.net/blog/2007/10/10/porting-mysqls-date_format-function-to-sybase-ase-1502/
Adaptive Server Enterprise/15.0.2/EBF 15654 ESD#4/P/Linux Intel/Linux 2.4.21-47.ELsmp i686/ase1502/2528/32-bit/FBO/Sat Apr 5 05:18:42 2008
I’m just missing something blindingly simple… I just know it.
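For what it’s worth, one guess at the blindingly simple bit (untested, and only a guess): ASE’s create function syntax, unlike create procedure, expects the body wrapped in an explicit begin/end block, which would explain the parser choking on the bare end at the bottom:

```sql
-- Guesswork, not a confirmed fix: same function body as above, but with
-- the begin/end block that ASE's create function syntax expects.
create function ok (@boolean bit)
returns varchar(2000)
as
begin
    declare @output_string varchar(2000)
    select @output_string = case when @boolean = 1 then "" else "not " end
    set @output_string = @output_string + "ok"
    return @output_string
end
```

If that isn’t it, the next thing I’d check is which T-SQL statements are actually permitted inside 15.0.2 SQL functions.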
What I would love to see is native support for SQLite. http://www.sqlite.org
It is the most installed embedded database on the planet, hands down. Don’t believe me? Did you know that the Firefox web browser, the Thunderbird newsreader, most Adobe products, Miro, etc. all have it embedded? – reference http://www.sqlite.org/mostdeployed.html
We use it extensively at work as:
- staging for mass data imports/exports/conversions
- local application ‘cache’ for large data sets
- projects that don’t require all the features of Sybase ASE (or Oracle for that matter)
One of the best features is that the database file itself is platform independent… copy the db on to AIX from your Windows box … then on to your old Amiga … then on to your Windows Mobile device. Getting the point? 🙂
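A tiny sketch of the “staging for mass imports” use, typed into the sqlite3 command-line shell (the table and file names here are invented for illustration):

```sql
-- Invented names throughout; run inside the sqlite3 shell.
create table staging_customers (
    id    integer primary key,
    name  text,
    email text
);
-- The two lines below are sqlite3 shell dot-commands, not SQL:
-- .mode csv
-- .import customers.csv staging_customers
-- Sanity-check the load before pushing rows on to the real database:
select count(*) from staging_customers;
```

The resulting .db file is the part that travels between platforms unchanged – stage on Windows, load from AIX.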
I really wish Sybase would make it so that Sybase ASE databases were truly platform and character set/sort order independent… but that is another dream 😉
Not a blurb in the PowerDesigner/PowerBuilder manuals or anything
When T V S Murty asked on the sybase-l mailing list about Sybase ASE, multicores and Sybase licensing, the discussion quickly drilled down to whether or not multicores were beneficial to Sybase ASE and database software in general. Jeff Tallman, of Sybase fame, described in detail how Sybase ASE and multicore processors relate to each other.
From: Jeff Tallman
Subject: [sybase-l] RE: Multicore processors and ASE
As always, a lot depends on the application profile. Something to consider for any multicore processor are these factors:
- The number of FPU units per chip (FPU = Floating Point Unit)
- The number and capacity (in IOPS) of IO processors per chip
- The type of chip multi-threading
With respect to the first point, most DBMSs (at least the commercial ones) use statistics for query optimization – so while the actual query processing doesn’t use a lot of FPU instructions (assuming a minimum of float datatypes, etc.), each query still requires a pretty good smack of FPU time to do the floating point math on the stats. The impact of this could be lessened by doing statement caching or fully prepared statements… or other means of reducing the optimizer load.
The second problem is one of capacity vs. bandwidth. All network and disk IO obviously needs to go through the IO processor. With 4 dual-core chips you usually have 4 IO processors.
With a single chip with 8 cores, it is likely that you will have only a single IO processor. The single IO processor has 8 cores all making requests. The number of IO operations per second it can handle becomes a real key factor in the box’s scalability.
The chip multi-threading is an interesting issue as there are ~3 different flavors today:
- Intel’s Hyperthreading (no longer implemented on XEON and I don’t think implemented at all anymore)
- Sun’s Chip Multi-Threading (CMT)
- IBM’s SMT
Some instructions require multiple cycles to complete because they are waiting on a fetch from main memory or whatever. The thread/process of execution typically blocks in these cases, leaving a fairly idle core. By making use of this idle time, CMT or SMT can increase overall throughput (ignoring HT, as it was fairly ineffective at this and appears to have been dropped by Intel lately).
The question that comes up is how you manage the threading. Do you do a form of timeslicing (i.e. when you suspend one process that is blocked on a call, do you let its replacement run for a certain length of time, or until it blocks, before returning to the original), or do you use an interrupt-based/preemptive mechanism in which, when the blocked call returns, you suspend the other thread? Both have advantages and disadvantages, and both allow more engines than cores.
However, it may also mean tuning ASE to be more reactive, such as reducing the ‘runnable process search count’. You also need to be careful that engines running on CMT’s don’t get woken back up on another core (especially if the L2 cache is split between the cores) as well as other considerations.
A rule of thumb: if you have a multi-core CPU that supports chip threading and you have a lengthy list of SPIDs in a ‘runnable’ state, enabling extra engines on the threads will likely help. If you don’t (i.e. you are IO bound), it probably won’t.
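In ASE terms that rule of thumb boils down to something like this (the configuration value is purely illustrative, not a recommendation):

```sql
-- Count SPIDs waiting for an engine; a long list suggests extra
-- engines (or chip threads) could help.
select count(*) from master..sysprocesses where status = 'runnable'
go
-- Make engines yield the core sooner instead of spinning; 3 is just an
-- illustrative value, the default is far higher.
sp_configure "runnable process search count", 3
go
```

Lowering the search count is the “more reactive” tuning mentioned above: an engine gives the core back to the chip scheduler sooner instead of spinning while looking for work.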
Currently, Sun uses a timeslicing mechanism that is more along the lines of ASE’s SPID management – and as a consequence, it shows scalability when the various tasks do a lot of blocking calls such as fetches from main memory. It does have the detrimental effect of only providing a percentage of cpu time to the ASE engine (i.e. 25% when 4 threads per core). The more parallelism is used within your application, such as higher numbers of concurrent users in ASE, the more it can be distributed across the engines.
You have to be careful with net engine affinity and short queries (i.e. DML). They can have a negative impact, which may be controllable using engine groups. Overall, a cpu-intensive/cpu-bound application can benefit from the Sun CMT implementation; an IO-bound application does not.
As many of you know, I’ve been working on a free magazine about various database management systems (DBMSs) called My Databases. I hope to have multiple authors in future issues covering all sorts of open source and proprietary databases.