Sunday, October 28. 2007
Explain Analyze Geometry Relation ... Posted by Regina Obe in database standards, postgis postgresql at 04:18
Explain Analyze Geometry Relation Operators and Joins Except Where?
Geometry Operators in Joins vs. WHERE clause
I've noticed that most people, when they write queries in PostGIS (and I presume other spatial databases as well), put all their geometry relation checks (intersects, contains, etc.) in the WHERE clause of their SQL statements rather than in the JOIN conditions of the FROM clause, whereas I tend to do the opposite.
I've always wondered if there is a speed advantage to doing it one way or the other, so I decided to look at the two EXPLAIN ANALYZE plans in pgAdmin for two sample queries.
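The original query listings did not survive in this copy of the post, but the pair I compared was of this general shape (the towns/schools table and column names here are hypothetical stand-ins):

```sql
-- Relation check expressed as a JOIN condition in the FROM clause
SELECT t.town, s.school_name
FROM towns t
    INNER JOIN schools s
        ON ST_Intersects(t.the_geom, s.the_geom);

-- Same relation check expressed in the WHERE clause
SELECT t.town, s.school_name
FROM towns t, schools s
WHERE ST_Intersects(t.the_geom, s.the_geom);
```

PostgreSQL's planner normalizes an inner join written either way into the same internal form, which is why the plans come out identical.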
As I expected, the two plans came out identical.
I ran this for a couple of other types of queries and reached the same conclusion. Even for compound statements, the EXPLAIN ANALYZE plans were identical, and the timings, averaged over enough iterations, came out the same.
So the question is: why is there a preference for putting these checks in the WHERE clause, and why do I prefer JOIN?
If you think about it, JOIN is kind of a weird concept that appeals mostly to database geeks, whereas WHERE is something most people deal with every day; WHERE is simply more intuitive. Also, let us not forget the nonconformist empire of Oracle: in the olden days of Oracle, joins were done in the WHERE clause, with plain equality for inner joins and the (+) operator for LEFT, RIGHT, and FULL outer joins, and I suspect a lot of Oracle database users still do that even though it is not the ANSI SQL standard way.
Now, why do I prefer JOIN over WHERE for this kind of thing?
Why ever ask WHERE?
Now, while I do tend to make sure that I have at least one JOIN condition for each of my tables, I also use WHERE. If you can do LEFT, RIGHT, and INNER joins with JOIN syntax, and WHERE can only simulate INNER JOINs, why is WHERE ever important?
Well, I can think of several cases where I would use WHERE in conjunction with JOIN, some cases where I arbitrarily choose JOIN and no WHERE out of habit, and some where I just use a WHERE: for instance when I have a VIEW, so I put the JOIN in the view and a WHERE in my query against the view; or when I have a single table; or when my audience doesn't understand the concept of JOIN, so it's easier to just use WHERE. Below is an example where WHERE is a useful thing and can't be replaced with just a JOIN.

EXCEPTION Queries: Give me a list of all towns that have no kindergartens
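The query listing itself is missing from this copy of the post; the LEFT JOIN formulation is of this general shape (the towns/schools tables and the school_type column are hypothetical stand-ins):

```sql
-- Towns with no kindergarten: LEFT JOIN, then keep only the
-- towns for which no matching school row was found.
SELECT t.town
FROM towns t
    LEFT JOIN schools s
        ON (ST_Intersects(t.the_geom, s.the_geom)
            AND s.school_type = 'Kindergarten')
WHERE s.gid IS NULL;
```

The WHERE s.gid IS NULL is the part that cannot be replaced by the JOIN alone: the JOIN pairs every town with its kindergartens (or with an all-NULL row when there are none), and the WHERE then keeps only the all-NULL cases.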
It is really quite hard to answer the above question without a WHERE, and using a NOT IN (subselect) or EXCEPT clause to avoid the JOIN is in general slower in the DBMSs I have tried - though not in all cases. As shown later below, my NOT IN for this particular case beats out the LEFT JOIN 2 out of 3 times, which could have more to do with the fact that the planner has an easier time optimizing geometric INNER JOINs than LEFT JOINs.
Now some may ask why it is that in most DBMSs I have tried, doing a LEFT JOIN is in general faster than doing a NOT IN or EXCEPT. My guess is that a LEFT JOIN in general takes less memory and sorting power to process, though this varies depending on the properties of your dataset.
Think about what happens when the two sets of geometries intersect: I can immediately throw those results out, because looking forward I know I don't care about towns that have kindergartens. As soon as I see such a town, I throw it away. So while I need to process it, I immediately discard the information. In fact, I never asked what other schools it has - as soon as I see a town with a kindergarten, I no longer care about it. I never have to consider that town again, join it, or order it with my other towns. I do not need to remember it.
If I do a NOT IN or EXCEPT, I first have to ask which towns have kindergartens, and then which towns are not in that set. That means more memory and processor needed for sorting, in general, because I need to hold the full set of towns that have kindergartens to compare against my full town set. Well, it's not quite that simple for NOT IN and EXCEPT, but almost, because the planner sees these as two distinct questions, whereas in the LEFT JOIN with WHERE case it perceives them as one question.
Below is the same question asked with a NOT IN. Note: we are asking who is in the list (our inner question), and then who is not in the list that we just generated.
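The listing was lost here; a sketch of the NOT IN formulation, using the same hypothetical towns/schools tables as above:

```sql
-- Inner question: which towns have a kindergarten?
-- Outer question: which towns are not in that list?
SELECT t.town
FROM towns t
WHERE t.town_id NOT IN
    (SELECT t2.town_id
     FROM towns t2
        INNER JOIN schools s
            ON ST_Intersects(t2.the_geom, s.the_geom)
     WHERE s.school_type = 'Kindergarten');
```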
Same question with EXCEPT - return all towns except those that have kindergartens
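Again the listing did not survive; the EXCEPT version looks roughly like this (same hypothetical tables):

```sql
-- All towns, minus the set of towns that have a kindergarten
SELECT t.town
FROM towns t
EXCEPT
SELECT t2.town
FROM towns t2
    INNER JOIN schools s
        ON ST_Intersects(t2.the_geom, s.the_geom)
WHERE s.school_type = 'Kindergarten';
```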
One reason why it takes less memory to do a LEFT JOIN is also what makes database programming a little harder to understand than standard procedural logic. Remember I stressed the idea of concepts versus reality. Sure, the idea of a school that doesn't actually exist is kind of silly, but it's a useful model, and it allows us to do one very important thing: ERADICATE INCONVENIENT EXCEPTIONS. All my towns now have kindergartens (sort of), so I am no longer asking a question I have no data for. The fact that some of these kindergartens are imaginary, and that I am now on the hunt for towns with imaginary kindergartens, is inconsequential. I have eradicated the inconvenient exception that forces me to ask which towns have kindergartens - a question I don't care about.
Although JOINs happen conceptually before WHERE, in reality they don't need to, as long as the planner can guarantee that solving things out of order mimics the conceptual model. In reality they never need to happen at all, if reality can mimic the concept.
For example, consider what happens if you had a condition in your WHERE clause that the planner can prove is always false.
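The original example did not survive; a sketch of the sort of condition meant here, using the hypothetical towns/schools tables from earlier. The planner constant-folds the contradiction at plan time, so the join conceptually "happens" but is never actually executed:

```sql
-- PostgreSQL folds 1 = 2 to false at plan time; EXPLAIN shows a
-- single Result node with "One-Time Filter: false" and the join
-- between towns and schools is never carried out.
SELECT t.town, s.school_name
FROM towns t
    INNER JOIN schools s
        ON ST_Intersects(t.the_geom, s.the_geom)
WHERE 1 = 2;
```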
It's actually a wonderful hack that you can state a problem, see annoying exceptions to the rule, and quickly concoct conceptual models that destroy those nasty exceptions so you can treat everything the same.
Saturday, October 20. 2007
Database Information Schema Catalog, ... Posted by Regina Obe in database standards, postgis postgresql at 00:00
Database Information Schema Catalog, NULLS, and Code Generation
I love the idea of code generation without using IDEs, just because I know that if my IDEs of choice are not present at a particular moment, I've got another Swiss Army knife to rely on. I particularly like SQL code generation schemes that rely on a database to do it. When you are trying to generate SQL code, you often need the metadata of your table structures to do so, and using the database to get that info is often the easiest approach.
This post is in the spirit of Hubert Lubaczewski's recent PostgreSQL post grant XXX on * ?, which details how to grant access to tables in PostgreSQL using some SQL code generation tricks. As it turns out, I had a similar situation to deal with recently, but it involved updating fields to NULL.
Empty String versus Null
Now, there have been long debates about when to use NULL vs. empty string and the various gotchas involved. If you want to know about these, the series on nulls, What if null if null is null null null is null?, by Hugo Kornelis is a pretty good one. Hugo is predominantly a Microsoft SQL Server blogger, but for the most part how NULL is handled is pretty much the same across most pseudo-ANSI-standard relational databases, so regardless of your database poison of choice, his comments have equal merit.
I generally prefer the use of NULL instead of empty string, all gotchas aside, mostly for philosophical reasons: I believe that the absence of data should be a black hole, and that is what NULL is. When you do a lot of statistical and financial reporting, as we do, it's immensely useful, but you have to be cognizant of the difference between 0, empty string, and NULL, and make sure each is used consistently within your database. There is one other reason I prefer NULL over an empty string: NULL can be cast to any data type, because it represents the existence of nothing, whereas an empty string cannot, because an empty string is a string. This gets me to my particular dilemma.
Updating Empty String to NULL
Oftentimes when I get property parcel data, it is sent to me as all varchar or some such thing, and the field names are always changing on me (the evils of DBF/ESRI Shape as a transport mechanism); I then have to massage this data into my superbly structured tables, where a number is a number and can be tabulated without casting. Now take the case where you have, say, a land parcel's assessed value or some other numeric field that comes in as varchar, you need to stuff it into an integer or float (double precision) field, and a lot of the values come through as empty strings.
You get an error like this - ERROR: invalid input syntax for double precision: "" - in PostgreSQL (or a similar one in SQL Server) when you do something like the following.
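The failing statement was lost from this copy of the post; the error comes from something of this shape (the staging/parcels table and assessed_value column names are made up for illustration):

```sql
-- staging.assessed_value is varchar and contains empty strings;
-- parcels.assessed_value is double precision, so the cast blows up
INSERT INTO parcels(parcel_id, assessed_value)
SELECT parcel_id, CAST(assessed_value AS double precision)
FROM staging;
-- ERROR: invalid input syntax for double precision: ""
```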
But if you do something like the following, it works just fine.
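The statement that followed was also lost; presumably it demonstrated that NULL, unlike an empty string, casts cleanly to any type:

```sql
-- NULL casts to any data type; an empty string does not
SELECT CAST(NULL AS double precision);   -- returns NULL, no error
SELECT CAST('' AS double precision);     -- fails with the error above
```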
So the way I get around this unpleasantry is to set all varchar fields that are empty to NULL before I try to insert the data into my final table structure. You can imagine this gets pretty repetitive if you have to do it for, say, 20 fields. So here is my trick to generate the SQL to do this in PostgreSQL.
SELECT 'UPDATE ' || table_name || ' SET ' || column_name ||
       ' = NULL WHERE ' || column_name || ' = '''';' As sqlupdate
FROM information_schema.columns
WHERE table_name = 'sometable'
    AND data_type LIKE '%char%'
The above will return a row for each column in your table that is a character varying or char field and will contain the update statement to update all empty strings in that column to NULL. If you want to go one better, you can create a custom aggregate for strings (as I mentioned in More generate series) and use it to get a single row containing all your update statements.
Note that this same trick works in other databases that support the ANSI/ISO SQL information_schema, such as SQL Server 2005, SQL Server 2000, MySQL 5, and PostgreSQL 7.4+. (Sadly, Oracle apparently doesn't support information_schema - perhaps someday. :)) In SQL Server 2000/2005 you would replace the || with +, so your code would look like this:
SELECT 'UPDATE ' + table_name + ' SET ' + column_name +
       ' = NULL WHERE ' + column_name + ' = '''';' As sqlupdate
FROM information_schema.columns
WHERE table_name = 'sometable'
    AND data_type LIKE '%char%'
If you want to have the above return a single row in SQL Server 2005, you would similarly create an aggregate function for strings as described here - granted, a bit more involved than doing the same in PostgreSQL - or use SQL Server 2005's XPath syntax as described here.
Note that if you are making heavy use of database schemas, then you will need to qualify your tables with the schema name and do a where on the table_schema as well.
SELECT 'UPDATE ' || table_schema || '.' || table_name || ' SET ' || column_name ||
       ' = NULL WHERE ' || column_name || ' = '''';' As sqlupdate
FROM information_schema.columns
WHERE table_name = 'sometable'
    AND data_type LIKE '%char%'
    AND table_schema = 'assessing';
Explore the INFORMATION_SCHEMA
The INFORMATION_SCHEMA is chock-full of all sorts of useful metadata about your database objects. While some tables may not exist in some database management systems (DBMSs) - e.g. PostgreSQL has an information_schema.sequences table (defined in the SQL:2003 standard) which you won't find in MySQL and SQL Server, because those databases don't have sequence objects - the tables that do exist are consistently named across all DBMSs that support them, and so are their field names.
There are three tables (views) I find most useful in the information schema; these are all available in the DBMSs I mentioned.
Note: for the most part, if not in all cases, the INFORMATION_SCHEMA views are implemented as views on top of the proprietary DBMS system tables. The reasons I find most compelling to use them instead of the system tables directly are the following.
Friday, October 12. 2007
Map Dicing and other stuff Posted by Regina Obe in generate_series, microsoft, postgis postgresql at 00:13
Map Dicing and other stuff
Map Crushers and Silverlight
My husband, Leo, and I went to the Re-MIX Boston conference recently. They have posted some videos of some of the Boston Re-Mix sessions and more are coming in case anyone wants to see them.
On the GIS Microsoft Virtual Earth Front
I attended the Virtual Earth presentation. I didn't get too much out of it that I didn't know already, except for two points.
On the Silverlight front
Map Dicing in Spatial Databases: PostGIS Example
The Microsoft Virtual Earth presentation did get me thinking about map dicing in spatial databases. Normally when I get maps, I get them on a silver platter - already at the lowest granularity I need. But on rare occasions I would have liked to dice up the data more.
Why would you want to do this in a spatial database - particularly PostGIS? For one, the more granular your data, the more generally useful your spatial indexes, since they more closely mirror your actual data; it is also speedier for statistical aggregation and thematic mapping when spatially joining against bigger pieces, if you know that your smaller pieces are fully contained in your larger area of interest. Only to a point, though: at a certain point the overhead of the additional records counteracts the benefits of tighter indexing, and your indexes just become less useful for other reasons.
At any rate, the approach I am about to show may not be the best, since it's something I thought up in my sleep.
The basic approach I would use to dice up, say, US state boundaries, Massachusetts towns, or census blocks into smaller rectangular quadrants is to first create a grid over the extent of the area, broken into even rectangles of x width and y height, and then do an intersection with my map of interest to get a new diced map. I will not only dice space - I shall also pseudo-dice attribute data. I say pseudo because I am going to assume that things like population density are even across each town boundary, which we know is not true. For data such as census blocks, and depending how finely you dice, this assumption may not be so inaccurate.
For those not familiar with the wonders of spatial intersection, take a peek at our explanation of ST_Intersection and ST_Intersects, complete with sample diagrams.
This technique uses OGC standard functions, so it should work, with slight variations in syntax, in other spatial databases such as Oracle Spatial, IBM DB2 Spatial Extender, MSSQL Spatial, etc. The only part not OGC compliant is my use of the PostgreSQL-specific function generate_series, but this can easily be simulated in other databases by creating a dummy table of numbers, say from 0 to 10000.
The following SQL calls demonstrate this approach. I would be interested in learning how real GIS people do this kind of thing.
Step 1 - Get the data of interest and load it - for this I will be using TOWNSSURVEY_POLYM from MassGIS.
Step 3 - Make our throwaway grid - alas, we found a good use for a Cartesian product
In case it is not quite clear to folks what I am doing here, here it is in simple English:
Create a reference box starting at the origin of our Massachusetts extent, of dimension 1485 x 911 meters - in quasi OGC notation, BOX(xorigin yorigin, (xorigin + 1485) (yorigin + 911)). Next, take this box and use it as a paint brush to paint across and then down, by translating it hor.n*1485, ver.n*911.
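The grid-building statement itself is missing from this copy; the steps above sketch out as something like the following, where the origin coordinates and series bounds are placeholders standing in for the real extent of the towns layer (SRID 26986, Massachusetts State Plane meters):

```sql
-- Paint a 1485 x 911 meter reference box across and then down the extent.
-- 33992 / 887230 are placeholder xorigin / yorigin values; the series
-- bounds would be sized to cover the real extent of the towns table.
CREATE TABLE throwaway_grid AS
SELECT hor.n AS hn, ver.n AS vn,
    ST_Translate(
        ST_GeomFromText('POLYGON((33992 887230, 35477 887230,
            35477 888141, 33992 888141, 33992 887230))', 26986),
        hor.n * 1485, ver.n * 911) AS the_geom
FROM generate_series(0, 300) AS hor(n),
     generate_series(0, 200) AS ver(n);

CREATE INDEX idx_throwaway_grid_the_geom
    ON throwaway_grid USING gist(the_geom);
```

The comma between the two generate_series calls in the FROM is the Cartesian product mentioned above: every horizontal step paired with every vertical step gives one cell of the grid.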
Step 4 - Create our final dataset - the towns nicely diced up into smaller rectangular shapes
CREATE TABLE towns_grid (
    gid serial NOT NULL PRIMARY KEY,
    objectid integer,
    town character varying(21),
    town_id smallint,
    pop1980 integer,
    pop1990 integer,
    pop2000 integer,
    popch80_90 smallint,
    popch90_00 integer,
    "type" character varying(2),
    fourcolor smallint,
    fips_stco integer,
    sum_acres double precision,
    sum_square double precision
);

SELECT AddGeometryColumn('public', 'towns_grid', 'the_geom', 26986, 'MULTIPOLYGON', 2);

--Query returned successfully: 22713 rows affected, 1273117 ms execution time.
INSERT INTO towns_grid(objectid, town, town_id, pop1980, pop1990, pop2000,
    popch80_90, popch90_00, "type", fourcolor, fips_stco,
    sum_acres, sum_square, the_geom)
SELECT objectid, town, town_id,
    pop1980*ST_Area(ST_Intersection(t.the_geom, tg.the_geom))/ST_Area(t.the_geom),
    pop1990*ST_Area(ST_Intersection(t.the_geom, tg.the_geom))/ST_Area(t.the_geom),
    pop2000*ST_Area(ST_Intersection(t.the_geom, tg.the_geom))/ST_Area(t.the_geom),
    popch80_90*ST_Area(ST_Intersection(t.the_geom, tg.the_geom))/ST_Area(t.the_geom),
    popch90_00*ST_Area(ST_Intersection(t.the_geom, tg.the_geom))/ST_Area(t.the_geom),
    "type", fourcolor, fips_stco,
    sum_acres*ST_Area(ST_Intersection(t.the_geom, tg.the_geom))/ST_Area(t.the_geom),
    sum_square*ST_Area(ST_Intersection(t.the_geom, tg.the_geom))/ST_Area(t.the_geom),
    ST_Multi(ST_Intersection(t.the_geom, tg.the_geom))
FROM towns t
    INNER JOIN throwaway_grid tg
        ON ST_Intersects(t.the_geom, tg.the_geom);

CREATE INDEX idx_towns_grid_the_geom ON towns_grid USING gist(the_geom);
Before and After pictures
Thursday, October 04. 2007
Can Open source Software teach you ... Posted by Regina Obe in foss at 08:50
Can Open source Software teach you anything about using Proprietary Software?
I was reading James Fee's blog recently on Open Source on the beach at Waikiki and as usual James does a good job of stirring up thinking.
One thing I found interesting about the comments was the perception a lot of people have that you choose to understand one tool or another and if you choose open source for one thing, then you are somehow taking away time to learn more useful proprietary techniques. I have found this speculation to be for the most part untrue.
For example, I consider myself to be fundamentally a Microsoft SQL Server expert and fairly competent in Microsoft technologies. When I started to get involved in open source software - such as PostgreSQL, MySQL, Linux, MapServer, PHP, etc. - I discovered something very startling: these open source folks really care about standards. Understanding standards is more important than understanding how to use a set piece of software, because standards are more aligned with trends in technology.
For example, a long time ago I used to think COALESCE and NULLIF were functions developed by the PostgreSQL folks. One day, forgetting which database I was in, I accidentally used them in SQL Server. And guess what? SQL Server knew what they meant. Similarly, SQL constructs like INTERSECT and EXCEPT existed in PostgreSQL long before SQL Server introduced them in SQL Server 2005, so when SQL Server 2005 came out, gosh darn it, I already knew how to use these tools. The same holds true for PostGIS, by the way. You will find that PostGIS does things the OGC standards way, so when SQL Server 2008 finally comes out with OGC spatial support, I suspect people using PostGIS and similar tools will be way ahead of the curve.
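Both functions are plain ANSI SQL, which is exactly why they behave identically in both engines. A quick illustration:

```sql
-- COALESCE returns its first non-null argument;
-- NULLIF returns NULL when its two arguments are equal,
-- otherwise the first argument. Works the same in
-- PostgreSQL and SQL Server.
SELECT COALESCE(NULL, 'fallback');   -- 'fallback'
SELECT NULLIF('same', 'same');       -- NULL
SELECT NULLIF('a', 'b');             -- 'a'
```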
Another example: because of my exposure to open source tools such as PHP, Linux, and PostgreSQL, I realized early on that Microsoft's Active Directory was nothing more than a Lightweight Directory Access Protocol (LDAP) service by another name. This had some very interesting consequences. For example, I realized that it was easier to query Microsoft Active Directory with PHP than it was even using the Microsoft .NET Framework.
In the GIS world, because of my exposure to MapServer and other open GIS tools, I knew what ESRI's WFSConnector and WMSConnector were for, and how to test them, long before my GIS brethren knew what those terms even stood for. I knew the foundations of these and found myself explaining the concepts of web services and so on to my GIS friends.
So the point is this: if you are spending your time simply learning how to use proprietary tools, instead of really trying to understand the fundamentals that drive those tools, you have totally missed the boat and you will be forever playing catch-up.
I have found that Open source really enforces understanding fundamentals more than proprietary does and that is a very good thing because that knowledge has a longer shelf-life.
Tuesday, October 02. 2007
More generate_series tricks Posted by Regina Obe in generate_series, postgis postgresql at 18:09
More generate_series tricks
PostGIS generate_series tricks
generate_series comes in particularly handy for manipulating geometries in a database. The following examples were done using PostGIS 1.3.1. Below are some common use cases.
/** Break multipolygon geometries into single polygon geometries **/
INSERT INTO mypoly(the_geom)
SELECT ST_GeometryN(the_geom,
    generate_series(1, ST_NumGeometries(the_geom))) AS poly
FROM mymultipoly
/** The ST_ExteriorRing function only takes polygons. If each of your
geometries is a multipolygon and you want to use ST_ExteriorRing to get the
exterior linestring of each polygon in each multipolygon, you would do
something like the below. **/
SELECT a.gid,
    ST_ExteriorRing(ST_GeometryN(a.the_geom,
        generate_series(1, ST_NumGeometries(a.the_geom)))) AS a_singlelinenoholes
FROM somegeomtable a
/** If you wanted to do the above but still maintain the same number of
records as before, you would do this - which gives you a multilinestring
geometry where each linestring is the exterior ring of one polygon **/
SELECT b.gid,
    ST_Collect(b.a_singlelinenoholes) AS a_multilinesnoholes
FROM (SELECT a.gid,
        ST_ExteriorRing(ST_GeometryN(a.the_geom,
            generate_series(1, ST_NumGeometries(a.the_geom)))) AS a_singlelinenoholes
      FROM somegeomtable a) b
GROUP BY b.gid
NOTE: Because of the way generate_series works, we can't have two generate_series calls in the SELECT list. So if we need an inner and an outer loop, we put one in the SELECT and one in the FROM, and add a limiting WHERE clause as shown below.
/** Get all interior linestring rings (holes) of a multipolygon into a
separate table, with one record per interior ring. **/
SELECT ST_BuildArea(
        ST_InteriorRingN(ST_GeometryN(a.the_geom, gn.n),
            generate_series(1,
                ST_NumInteriorRings(ST_GeometryN(a.the_geom, gn.n))))
    ) AS a_hole
FROM somegeomtable a,
    generate_series(1,
        (SELECT Max(ST_NumGeometries(the_geom)) FROM somegeomtable)) gn(n)
WHERE ST_NumGeometries(the_geom) >= gn.n
    AND ST_NumInteriorRings(ST_GeometryN(the_geom, gn.n)) > 0
To a Database programmer all problems look like database problems
Doing a lot of database programming warps your thinking. Normally I would consider thinking in the languages you program in to be a bad thing because to some extent it limits your thinking to what is supported in said language. In general I think doing a lot of database programming has had positive effects on me. I find myself thinking about problems in parallel, unencumbered by the optical illusions of step dependency, and instead grouping problems in sets.
It seems that regardless of what language I program in, I see sets and patterns everywhere. When that happens I can't help but hit that particular nail with a database hammer.
Below is a PostgreSQL example that uses generate_series to generate ASP.NET GridView column markup for a month-column cross tab. Originally I used the AutoGenerateColumns="true" property of the GridView, but I needed to manipulate the formatting of my columns, so that was a bit less than satisfying.
The code below generates a markup record for each GridView month column.
SELECT '<asp:BoundField DataField="' ||
    trim(to_char(date '2007-01-01' + (n || ' month')::interval, 'Month')) ||
    '" HeaderText="' ||
    trim(to_char(date '2007-01-01' + (n || ' month')::interval, 'Mon')) ||
    '" ItemStyle-HorizontalAlign="Center" />' as newval
FROM generate_series(0,11) n
If you were to change the code above to add a SUM aggregate function on strings (a definition of a PostgreSQL SUM aggregate for text can be found here), you would get just one row with all your markup. For this particular one, we also created a column that returns the corresponding SQL for the cross tab query.
SELECT '<asp:GridView id="grv" runat="server">' || E'\r\n' ||
    SUM('<asp:BoundField DataField="' || mth.long_mname ||
        '" HeaderText="' || mth.short_mname ||
        '" ItemStyle-HorizontalAlign="Center" />' || E'\r\n') ||
    '</asp:GridView>' as aspxmarkup,
    'SELECT ' ||
    SUM('SUM(CASE WHEN report_date BETWEEN \'' || mth.start_date ||
        '\' AND \'' || mth.end_date ||
        '\' THEN amount ELSE NULL END) As ' || mth.long_mname || ', ' || E'\r\n') ||
    ' SUM(amount) As total ' || E'\r\n' ||
    ' FROM sometable WHERE report_date between \'2007-01-01\' AND \'2007-12-31\'' as sqlcrosstab
FROM
    (SELECT (n + 1) As mnum,
        trim(to_char(date '2007-01-01' + (n || ' month')::interval, 'Mon')) As short_mname,
        trim(to_char(date '2007-01-01' + (n || ' month')::interval, 'Month')) As long_mname,
        date '2007-01-01' + (n || ' month')::interval As start_date,
        date '2007-01-01' + ((n + 1) || ' month')::interval - '1 day'::interval As end_date
    FROM generate_series(0,11) n) As mth
The GridView markup output of the aspxmarkup column looks like this:
<asp:GridView id="grv" runat="server" >
    <asp:BoundField DataField="January" HeaderText="Jan" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="February" HeaderText="Feb" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="March" HeaderText="Mar" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="April" HeaderText="Apr" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="May" HeaderText="May" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="June" HeaderText="Jun" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="July" HeaderText="Jul" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="August" HeaderText="Aug" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="September" HeaderText="Sep" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="October" HeaderText="Oct" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="November" HeaderText="Nov" ItemStyle-HorizontalAlign="Center" />
    <asp:BoundField DataField="December" HeaderText="Dec" ItemStyle-HorizontalAlign="Center" />
</asp:GridView>
And the SQL output of the sqlcrosstab column looks like this:
SELECT SUM(CASE WHEN report_date BETWEEN '2007-01-01 00:00:00' AND '2007-01-31 00:00:00' THEN amount ELSE NULL END) As January,
    SUM(CASE WHEN report_date BETWEEN '2007-02-01 00:00:00' AND '2007-02-28 00:00:00' THEN amount ELSE NULL END) As February,
    SUM(CASE WHEN report_date BETWEEN '2007-03-01 00:00:00' AND '2007-03-31 00:00:00' THEN amount ELSE NULL END) As March,
    SUM(CASE WHEN report_date BETWEEN '2007-04-01 00:00:00' AND '2007-04-30 00:00:00' THEN amount ELSE NULL END) As April,
    SUM(CASE WHEN report_date BETWEEN '2007-05-01 00:00:00' AND '2007-05-31 00:00:00' THEN amount ELSE NULL END) As May,
    SUM(CASE WHEN report_date BETWEEN '2007-06-01 00:00:00' AND '2007-06-30 00:00:00' THEN amount ELSE NULL END) As June,
    SUM(CASE WHEN report_date BETWEEN '2007-07-01 00:00:00' AND '2007-07-31 00:00:00' THEN amount ELSE NULL END) As July,
    SUM(CASE WHEN report_date BETWEEN '2007-08-01 00:00:00' AND '2007-08-31 00:00:00' THEN amount ELSE NULL END) As August,
    SUM(CASE WHEN report_date BETWEEN '2007-09-01 00:00:00' AND '2007-09-30 00:00:00' THEN amount ELSE NULL END) As September,
    SUM(CASE WHEN report_date BETWEEN '2007-10-01 00:00:00' AND '2007-10-31 00:00:00' THEN amount ELSE NULL END) As October,
    SUM(CASE WHEN report_date BETWEEN '2007-11-01 00:00:00' AND '2007-11-30 00:00:00' THEN amount ELSE NULL END) As November,
    SUM(CASE WHEN report_date BETWEEN '2007-12-01 00:00:00' AND '2007-12-31 00:00:00' THEN amount ELSE NULL END) As December,
    SUM(amount) As total
FROM sometable
WHERE report_date between '2007-01-01' AND '2007-12-31'