<> preface

If you have found your way to this blog post, you presumably already have a basic understanding of how window functions are used, so I won't waste words on that here.

This post is mainly about what to do when you want to apply a WHERE-like condition to a window function.

For ease of understanding, the case below uses the ranking function row_number() as the example. The same approach works for other window functions, so you can extend it by analogy.

What you can learn from this article:

* Using UNION ALL to divide and conquer the problem
* A small trick with window functions that solves the problem (simple and efficient)
<> case

If we want to rank a set of data by a certain column, we generally use row_number(). But what if we only want to compute a row_number() ranking over the rows that meet a certain condition?

For instance, suppose we have the following data:

spark-sql (default)> select score from math_test;
30
50
101
300
456
768
999
130
350
1130
1350
1131
1150
We want to rank the rows whose score is greater than 300 (starting from 1), and fill the rows whose score is less than or equal to 300 with the default value NULL.
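
If you want to follow along, here is a minimal setup sketch for recreating the sample data in a Spark SQL session; the table and column names match the examples, but the exact DDL (USING parquet, INT type) is my assumption rather than something taken from the original post.

-- Hypothetical setup: recreate the sample data used in this post
CREATE TABLE math_test (score INT) USING parquet;
INSERT INTO math_test VALUES
  (30), (50), (101), (300), (456), (768), (999), (130),
  (350), (1130), (1350), (1131), (1150);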

<> Solution

* UNION ALL: divide and conquer

The idea behind this approach is very simple: pull out the qualifying rows and rank them with row_number(), fill the non-qualifying rows with the default value NULL, and finally stitch the two result sets together with UNION ALL.

To make the effect easier to see, the final result is sorted by score (this step can be removed; it is only for demonstration).
SELECT score, rk
FROM (
    SELECT score, ROW_NUMBER() OVER(ORDER BY score) AS rk
    FROM math_test
    WHERE score > 300
    UNION ALL
    SELECT score, NULL AS rk
    FROM math_test
    WHERE score <= 300
) t
ORDER BY score;

-- Result:
-- score  rk
-- 30     NULL
-- 50     NULL
-- 101    NULL
-- 130    NULL
-- 300    NULL
-- 350    1
-- 456    2
-- 768    3
-- 999    4
-- 1130   5
-- 1131   6
-- 1150   7
-- 1350   8
UNION ALL is a simple approach and easy to understand, but the SQL is long-winded, and it is also inefficient: the table has to be scanned twice.

* A window function trick

Now for the approach I actually recommend: a small trick with window functions.

Here is the complete SQL:
SELECT score,
       CASE WHEN score > 300
            THEN ROW_NUMBER() OVER(
                     PARTITION BY (CASE WHEN score > 300 THEN 1 ELSE 2 END)
                     ORDER BY score
                 )
            ELSE NULL
       END AS rk
FROM math_test;

-- Result:
-- score  rk
-- 350    1
-- 456    2
-- 768    3
-- 999    4
-- 1130   5
-- 1131   6
-- 1150   7
-- 1350   8
-- 30     NULL
-- 50     NULL
-- 101    NULL
-- 130    NULL
-- 300    NULL
Let's break this down piece by piece.

The outer expression:

CASE WHEN score > 300 THEN <ranking expression> ELSE NULL END AS rk

At this level we check whether score > 300: only those rows get a ranking, and the rest are filled with NULL.

Inside <ranking expression> we have:

ROW_NUMBER() OVER(PARTITION BY (CASE WHEN score > 300 THEN 1 ELSE 2 END) ORDER BY score)

We pass CASE WHEN score > 300 THEN 1 ELSE 2 END to PARTITION BY, so rows with score > 300 go into partition 1 and all other rows go into partition 2.

This way, within partition 1 the window only "sees" the rows that matter to us, those with score > 300, so row_number() ranks just that data.
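
To make the grouping visible, here is a small diagnostic sketch (the grp and rn column aliases are mine, added purely for illustration); it selects the partition key alongside the raw row number:

SELECT score,
       CASE WHEN score > 300 THEN 1 ELSE 2 END AS grp,  -- the partition key
       ROW_NUMBER() OVER(
           PARTITION BY (CASE WHEN score > 300 THEN 1 ELSE 2 END)
           ORDER BY score
       ) AS rn                                          -- rank within each partition
FROM math_test;

Note that the rows in partition 2 (score <= 300) also receive row numbers 1 through 5 within their own partition; that is exactly why the outer CASE WHEN is still needed to mask them with NULL.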

* What if we don't split into groups with PARTITION BY?

First, here is the result:
SELECT score,
       CASE WHEN score > 300
            THEN ROW_NUMBER() OVER(ORDER BY score)
            ELSE NULL
       END AS rk
FROM math_test;

-- Result:
-- score  rk
-- 30     NULL
-- 50     NULL
-- 101    NULL
-- 130    NULL
-- 300    NULL
-- 350    6
-- 456    7
-- 768    8
-- 999    9
-- 1130   10
-- 1131   11
-- 1150   12
-- 1350   13
As you can see, the rows with score <= 300 are still filled with the default NULL, but because the window function can see the "whole" data set, the ranking produced by OVER(ORDER BY score) starts at 6 rather than 1, which is not what we wanted.

<> summary

This post used a ranking case to illustrate how to apply a WHERE-like condition to a window function and still get the expected result; you can extend the idea to other window functions.
Either approach works, but I still prefer the second one: it is simple and efficient.

That is the kind of thing that makes SQL wonderful ~~~~

– by Two monkeys
