SQL GROUP BY Tutorial: Count, Sum, Average, and HAVING Clauses Explained

The GROUP BY clause is a powerful statement, but it can sometimes be tricky to think about.

Even after eight years, every time I use a GROUP BY I have to stop and think about what it's actually doing.

In this article we'll look at how to construct a GROUP BY clause, what it does to your query, and how you can use it to aggregate data and gather insights about your data.

Here's what we'll cover:

  • Setting up the database
  • Setting up the data (creating sales)
  • How does GROUP BY work?
  • Writing GROUP BY clauses
  • Aggregations (COUNT, SUM, AVG)
  • Working with multiple groups
  • Using functions in the GROUP BY
  • Filtering groups with HAVING
  • Aggregates with implicit grouping

Setting up the database

Before we can write our queries we need to set up our database.

We'll be using PostgreSQL for these examples, but the queries and concepts shown here will easily translate to any other modern database system (like MySQL, SQL Server, and so on).

To work with our PostgreSQL database we can use psql, the interactive PostgreSQL command-line program. If you have another database client that you enjoy working with, that's fine too.

To start, let's create our database. With PostgreSQL already installed, we can run the command createdb <database-name> in our terminal to create a new database. I called mine fcc:

$ createdb fcc 

Next, let's start the interactive console by using the command psql and connect to the database we just made using \c <database-name>:

$ psql
psql (11.5)
Type "help" for help.

john=# \c fcc
You are now connected to database "fcc" as user "john".
fcc=#
Note: I've cleaned up the psql output in these examples to make it easier to read, so don't worry if the output shown here isn't exactly what you'd see in your terminal.

I encourage you to follow along with these examples and run these queries for yourself. You will learn and remember far more by working through these examples rather than just reading them.

Setting up the data (creating sales)

For our examples we'll use a table that stores the sales records of various products across different store locations.

We'll call this table sales, and it will be a simple representation of store sales: the location name, the product name, its price, and the time it was sold.

If we were building this table in a real application we'd set up foreign keys to other tables (like locations or products). But to illustrate the GROUP BY concepts we'll just use simple TEXT columns.
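Just for reference, here's a hypothetical sketch of what that normalized version might look like (these tables aren't used anywhere in this article, and the column names are my own assumption):

-- Hypothetical normalized schema (not used in this article)
CREATE TABLE locations (
  id   SERIAL PRIMARY KEY,
  name TEXT NOT NULL
);

CREATE TABLE products (
  id   SERIAL PRIMARY KEY,
  name TEXT NOT NULL
);

CREATE TABLE sales (
  location_id INTEGER REFERENCES locations(id),
  product_id  INTEGER REFERENCES products(id),
  price       DECIMAL,
  sold_at     TIMESTAMP
);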

Now let's create the simple table we'll actually use and insert some sales data:

CREATE TABLE sales(
  location TEXT,
  product TEXT,
  price DECIMAL,
  sold_at TIMESTAMP
);

INSERT INTO sales(location, product, price, sold_at) VALUES
  ('HQ', 'Coffee', 2, NOW()),
  ('HQ', 'Coffee', 2, NOW() - INTERVAL '1 hour'),
  ('Downtown', 'Bagel', 3, NOW() - INTERVAL '2 hour'),
  ('Downtown', 'Coffee', 2, NOW() - INTERVAL '1 day'),
  ('HQ', 'Bagel', 2, NOW() - INTERVAL '2 day'),
  ('1st Street', 'Bagel', 3, NOW() - INTERVAL '2 day' - INTERVAL '1 hour'),
  ('1st Street', 'Coffee', 2, NOW() - INTERVAL '3 day'),
  ('HQ', 'Bagel', 3, NOW() - INTERVAL '3 day' - INTERVAL '1 hour');

We have three locations: HQ, Downtown, and 1st Street.

We have two products, Coffee and Bagel, and we insert these sales with different sold_at values to represent items sold on different days and at different times.

There are a few sales from today, some from yesterday, and some from the day before that.

How does GROUP BY work?

To illustrate how the GROUP BY clause works, let's first talk through an example.

Imagine we have a room full of people who were born in different countries.

If we wanted to find the average height of the people in the room per country of birth, we would first ask them to separate into groups based on their country of birth.

Once they were separated into their groups, we could then calculate the average height within each group.

This is how the GROUP BY clause works. First we define how we want to group the rows together, and then we can perform calculations or aggregations on those groups.

Multiple groups

We can group the data into as many groups or sub-groups as we want.

For example, after asking people to split into groups based on their birth countries, we could tell each of those country groups to split further into groups based on their eye color.

By doing this we have groups of people based on the combination of their birth country and their eye color.

Now we could find the average height within each of these smaller groups, and we'd have a more specific result: the average height per birth country per eye color.

GROUP BY clauses are often used in situations where you can use the phrase per something or for each something:

  • Average height per birth country
  • Total number of people for each combination of eye and hair color
  • Total sales per product (see the sketch right after this list)
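As a quick preview of the syntax we'll build up over the next sections, that last one ("total sales per product") might look something like this on our sales table (a minimal sketch using the SUM aggregate covered below):

-- Total revenue per product (a preview of what's ahead)
SELECT product,
       SUM(price) AS total_revenue
FROM sales
GROUP BY product;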

Writing GROUP BY clauses

The GROUP BY clause is very easy to write: we just use the keywords GROUP BY and then specify the field (or fields) we want to group by:

SELECT ... FROM sales GROUP BY location;

This simple query groups our sales data by the location column.

We've done our grouping, but what do we put in our SELECT?

The obvious thing to select is our location: we're grouping by it, so at the very least we want to see the names of the groups we've made:

SELECT location FROM sales GROUP BY location; 

The result is our three locations:

  location
------------
 1st Street
 HQ
 Downtown
(3 rows)

If we look at our raw table data (SELECT * FROM sales;), we'll see that we have four rows with a location of HQ, two rows with a location of Downtown, and two rows with a location of 1st Street:

 product |  location  | price |          sold_at
---------+------------+-------+----------------------------
 Coffee  | HQ         |     2 | 2020-09-01 09:42:33.085995
 Coffee  | HQ         |     2 | 2020-09-01 08:42:33.085995
 Bagel   | Downtown   |     3 | 2020-09-01 07:42:33.085995
 Coffee  | Downtown   |     2 | 2020-08-31 09:42:33.085995
 Bagel   | HQ         |     2 | 2020-08-30 09:42:33.085995
 Bagel   | 1st Street |     3 | 2020-08-30 08:42:33.085995
 Coffee  | 1st Street |     2 | 2020-08-29 09:42:33.085995
 Bagel   | HQ         |     3 | 2020-08-29 08:42:33.085995
(8 rows)

By grouping on the location column, our database takes these input rows and identifies the unique locations among them. These unique locations serve as our "groups."

But what about the other columns in our table?

If we try to select a column like product that we didn't group by...

SELECT location, product FROM sales GROUP BY location; 

...we run into this error:

ERROR: column "sales.product" must appear in the GROUP BY clause or be used in an aggregate function 

The problem here is we've taken eight rows and squished or distilled them down to three.

We can't just return the rest of the columns like normal—we had eight rows, and now we have three.

What do we do with the remaining five rows of data? Which of the eight rows' data should be displayed on these three distinct location rows?

There's not a clear and definitive answer here.

To use the rest of our table data, we also have to distill the data from these remaining columns down into our three location groups.

This means that we have to aggregate or perform a calculation to produce some kind of summary information about our remaining data.

Aggregations (COUNT, SUM, AVG)

Once we've decided how to group our data, we can then perform aggregations on the remaining columns.

These are things like counting the number of rows per group, summing a particular value across the group, or averaging information within the group.

To start, let's find the number of sales per location.

Since each record in our sales table is one sale, the number of sales per location would be the number of rows within each location group.

To do this we'll use the aggregate function COUNT() to count the number of rows within each group:

SELECT location,
       COUNT(*) AS number_of_sales
FROM sales
GROUP BY location;

We use COUNT(*) which counts all of the input rows for a group.

(COUNT() also works with expressions instead of *, but it has slightly different behavior; there's a quick example at the end of this section.)

Here's how the database executes this query:

  • FROM sales — First, retrieve all of the records from the sales table
  • GROUP BY location — Next, determine the unique location groups
  • SELECT ... — Finally, select the location name and the count of the number of rows in that group

We also give this count of rows an alias using AS number_of_sales to make the output more readable. It looks like this:

  location  | number_of_sales
------------+-----------------
 1st Street |               2
 HQ         |               4
 Downtown   |               2
(3 rows)

The 1st Street location has two sales, HQ has four, and Downtown has two.

Here we can see how we've taken the remaining column data from our eight independent rows and distilled them into useful summary information for each location: the number of sales.
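About that aside on COUNT() with expressions: COUNT(some_expression) counts only the rows where that expression is not NULL, while COUNT(*) counts every row in the group. Our sample data has no NULL prices, so the two versions return the same numbers here, but a sketch of the difference looks like this:

-- COUNT(*) counts all rows in each group;
-- COUNT(price) would skip any rows where price IS NULL
SELECT location,
       COUNT(*)     AS all_rows,
       COUNT(price) AS rows_with_a_price
FROM sales
GROUP BY location;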

SUM

In a similar way, instead of counting the number of rows in a group, we could sum information within the group—like the total amount of money earned from those locations.

To do this we'll use the SUM() function:

SELECT location,
       SUM(price) AS total_revenue
FROM sales
GROUP BY location;

Instead of counting the number of rows in each group we sum the dollar amount of each sale, and this shows us the total revenue per location:

  location  | total_revenue
------------+---------------
 1st Street |             5
 HQ         |             9
 Downtown   |             5
(3 rows)

Average (AVG)

Finding the average sale price per location just means swapping out the SUM() function for the AVG() function:

SELECT location,
       AVG(price) AS average_revenue_per_sale
FROM sales
GROUP BY location;
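Depending on your client, AVG() on a DECIMAL column can print with a long tail of decimal places (roughly 2.25 for HQ and 2.50 for the other two locations, given our sample data). If you'd like tidier output, one option is to wrap the average in ROUND; a minimal sketch:

-- Round the average to two decimal places for readability
SELECT location,
       ROUND(AVG(price), 2) AS average_revenue_per_sale
FROM sales
GROUP BY location;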

Working with multiple groups

So far we've been working with just one group: location.

What if we wanted to sub-divide that group even further?

Similar to the "birth countries and eye color" scenario we started with, what if we wanted to find the number of sales per product per location?

To do this all we need to do is add the second grouping condition to our GROUP BY statement:

SELECT ... FROM sales GROUP BY location, product;

By adding a second column in our GROUP BY we further sub-divide our location groups into location groups per product.

Since we're now also grouping by the product column, we can now return it in our SELECT!

(I'll also throw some ORDER BYs on these queries to make the output easier to read.)

SELECT location, product
FROM sales
GROUP BY location, product
ORDER BY location, product;

Looking at the result of our new grouping, we can see our unique location/product combinations:

  location  | product
------------+---------
 1st Street | Bagel
 1st Street | Coffee
 Downtown   | Bagel
 Downtown   | Coffee
 HQ         | Bagel
 HQ         | Coffee
(6 rows)

Now that we have our groups, what do we want to do with the rest of our column data?

Well, we can find the number of sales per product per location, using the same aggregate functions as before:

SELECT location,
       product,
       COUNT(*) AS number_of_sales
FROM sales
GROUP BY location, product
ORDER BY location, product;

  location  | product | number_of_sales
------------+---------+-----------------
 1st Street | Bagel   |               1
 1st Street | Coffee  |               1
 Downtown   | Bagel   |               1
 Downtown   | Coffee  |               1
 HQ         | Bagel   |               2
 HQ         | Coffee  |               2
(6 rows)
As an exercise for The Reader™: find the total revenue (sum) of each product per location. (One way to check your answer is sketched just below.)
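If you'd like to check your answer, one possible solution (a sketch, certainly not the only way to write it) is to swap COUNT(*) for SUM(price) in the previous query:

-- Total revenue per product per location
SELECT location,
       product,
       SUM(price) AS total_revenue
FROM sales
GROUP BY location, product
ORDER BY location, product;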

Using functions in the GROUP BY

Next, let's try to find the total number of sales per day.

If we follow a similar pattern as we did with our locations and group by our sold_at column...

SELECT sold_at,
       COUNT(*) AS sales_per_day
FROM sales
GROUP BY sold_at
ORDER BY sold_at;

...we might expect to have each group be each unique day—but instead we see this:

          sold_at           | sales_per_day
----------------------------+---------------
 2020-08-29 08:42:33.085995 |             1
 2020-08-29 09:42:33.085995 |             1
 2020-08-30 08:42:33.085995 |             1
 2020-08-30 09:42:33.085995 |             1
 2020-08-31 09:42:33.085995 |             1
 2020-09-01 07:42:33.085995 |             1
 2020-09-01 08:42:33.085995 |             1
 2020-09-01 09:42:33.085995 |             1
(8 rows)

It looks like our data isn't grouped at all—we get each row back individually.

But, our data is actually grouped! The problem is each row's sold_at is a unique value—so every row gets its own group!

The GROUP BY is working correctly, but this is not the output we want.

The culprit is the unique hour/minute/second information of the timestamp.

Each of these timestamps differ by hours, minutes, or seconds—so they are each placed in their own group.

We need to convert each of these date and time values into just a date:

  • 2020-09-01 08:42:33.085995 => 2020-09-01
  • 2020-09-01 09:42:33.085995 => 2020-09-01

Converted to a date, all of the timestamps on the same day will return the same date value—and will therefore be placed into the same group.

To do this, we'll cast the sold_at timestamp value to a date:

SELECT sold_at::DATE AS date,
       COUNT(*) AS sales_per_day
FROM sales
GROUP BY sold_at::DATE
ORDER BY sold_at::DATE;

In our GROUP BY clause we use ::DATE to truncate the timestamp portion down to the "day." This effectively chops off the hours/minutes/seconds of the timestamp and just returns the day.

In our SELECT, we also return this same expression and give it an alias to pretty up the output.

For the same reason we couldn't return product without grouping by it or performing some kind of aggregation on it, the database won't let us return just sold_at—everything in the SELECT must either be in the GROUP BY or some kind of aggregate on the resulting groups.

The result is the sales per day that we originally wanted to see:

    date    | sales_per_day
------------+---------------
 2020-08-29 |             2
 2020-08-30 |             2
 2020-08-31 |             1
 2020-09-01 |             3
(4 rows)
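One small portability note: ::DATE is PostgreSQL's shorthand cast syntax. If you're following along in another database, the standard-SQL CAST form should behave the same way (a sketch; adjust to your system as needed):

-- Standard-SQL cast instead of PostgreSQL's ::DATE shorthand
SELECT CAST(sold_at AS DATE) AS sale_date,
       COUNT(*) AS sales_per_day
FROM sales
GROUP BY CAST(sold_at AS DATE)
ORDER BY CAST(sold_at AS DATE);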

Filtering groups with HAVING

Next let's look at how to filter our grouped rows.

To do this, let's try to find days where we had more than one sale.

Without grouping, we would normally filter our rows by using a WHERE clause. For example:

SELECT * FROM sales WHERE product = 'Coffee'; 

With our groups, we may want to do something like this to filter our groups based on the count of rows...

SELECT sold_at::DATE AS date,
       COUNT(*) AS sales_per_day
FROM sales
WHERE COUNT(*) > 1 -- filter the groups?
GROUP BY sold_at::DATE;

Unfortunately, this doesn't work and we receive this error:

ERROR:  aggregate functions are not allowed in WHERE

Aggregate functions are not allowed in the WHERE clause because the WHERE clause is evaluated before the GROUP BY clause—there aren't any groups yet to perform calculations on.

But, there is a type of clause that allows us to filter, perform aggregations, and it is evaluated after the GROUP BY clause: the HAVING clause.

The HAVING clause is like a WHERE clause for your groups.

To find days where we had more than one sale, we can add a HAVING clause that checks the count of rows in the group:

SELECT sold_at::DATE AS date,
       COUNT(*) AS sales_per_day
FROM sales
GROUP BY sold_at::DATE
HAVING COUNT(*) > 1;

This HAVING clause filters out any rows where the count of rows in that group is not greater than one, and we see that in our result set:

    date    | sales_per_day
------------+---------------
 2020-09-01 |             3
 2020-08-29 |             2
 2020-08-30 |             2
(3 rows)

Just for the sake of completeness, here's the order of execution for all parts of a SQL statement:

  • FROM - Retrieve all of the rows from the FROM table
  • JOIN - Perform any joins
  • WHERE - Filter rows
  • GROUP BY - Form groups
  • HAVING - Filter groups
  • SELECT - Select the data to return
  • ORDER BY - Order the output rows
  • LIMIT - Return a certain number of rows

Aggregates with implicit grouping

The last topic we'll look at is aggregations that can be performed without a GROUP BY, or, maybe better said, aggregations that have an implicit grouping.

These aggregations are useful in scenarios where you want to find one particular aggregate from a table—like the total amount of revenue or the greatest or least value of a column.

For example, we could find the total revenue across all locations by just selecting the sum from the entire table:

SELECT SUM(price) FROM sales;

 sum
-----
  19
(1 row)

So far we've done $19 of sales across all locations (hooray!).

Another useful thing we could query is the first or last of something.

For example, what is the date of our first sale?

To find this we just use the MIN() function:

SELECT MIN(sold_at)::DATE AS first_sale FROM sales;

 first_sale
------------
 2020-08-29
(1 row)

(To find the date of the last sale, just substitute MAX() for MIN().)

Using MIN / MAX

While these simple queries can be useful as a standalone query, they're often parts of filters for larger queries.

For example, let's try to find the total sales for the last day that we had sales.

One way we could write that query would be like this:

SELECT SUM(price) FROM sales WHERE sold_at::DATE = '2020-09-01'; 

This query works, but we've obviously hardcoded the date of 2020-09-01.

09/01/2020 may be the last date we had a sale, but it's not always going to be that date. We need a dynamic solution.

This can be achieved by combining this query with the MAX() function in a subquery:

SELECT SUM(price)
FROM sales
WHERE sold_at::DATE = (
  SELECT MAX(sold_at::DATE) FROM sales
);

In our WHERE clause we find the largest date in our table using a subquery: SELECT MAX(sold_at::DATE) FROM sales.

Then, we use this max date as the value we filter the table on, and sum the price of each sale.
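As a side note, the same filter could be written without MAX() by ordering the subquery and taking just its first row; a sketch of an equivalent rewrite:

-- Equivalent rewrite: find the most recent sale date with ORDER BY + LIMIT
SELECT SUM(price)
FROM sales
WHERE sold_at::DATE = (
  SELECT sold_at::DATE
  FROM sales
  ORDER BY sold_at DESC
  LIMIT 1
);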

Implicit grouping

I say that these are implicit groupings because if we try to select an aggregate value with a non-aggregated column like this...

SELECT SUM(price), location FROM sales; 

...we get our familiar error:

ERROR: column "sales.location" must appear in the GROUP BY clause or be used in an aggregate function 

GROUP BY is a tool

As with many other topics in software development, GROUP BY is a tool.

There are many ways to write and re-write these queries using combinations of GROUP BY, aggregate functions, or other tools like DISTINCT, ORDER BY, and LIMIT.
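For example, the "names of our groups" query from earlier could just as easily be written with DISTINCT, with no aggregates involved at all:

-- Equivalent to: SELECT location FROM sales GROUP BY location;
SELECT DISTINCT location
FROM sales;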

Understanding and working with GROUP BY takes a little bit of practice, but once you have it down you'll find a whole new batch of problems that are now solvable for you!

If you enjoyed this post, you can follow me on Twitter where I talk about database things and how to succeed in a career as a developer.

Thanks for reading!

John