CSDN points rules
The points rules are as follows:
1. 10 points for each original or translated article published.
2. 2 points for each reprinted article published.
3. The blogger earns 1 point for each comment received on an article.
4. 1 point for each comment made (no points for commenting on your own post, or for a blogger replying to comments on their own article).
5. 1 point for every 100 reads of a blog post, with the reading bonus capped at 100 points; in other words, the bonus stops growing once an article passes roughly 10,000 views.
6. If an article is deleted by an administrator or by the blogger, the points the blogger earned from that article are deducted accordingly.
7. If a comment is deleted by an administrator or the blogger, the points the commenter and the blogger earned from that comment are deducted accordingly (the blogger's deduction is not applied in real time, but is cleared at a fixed time each week).
8. In addition, there is a plagiarism-reporting feature: once a report confirms that an "original" article is plagiarized, the blogger's corresponding points are deducted.
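The rules above can be summarized as a small point calculator. This is a sketch of our own (the function name and parameters are made up); the per-rule values come from the list above:

```python
# Hypothetical helper summarizing the CSDN point rules; the rule values
# are taken from the numbered list above, the function itself is ours.
def csdn_points(originals=0, reprints=0, comments_received=0,
                comments_made=0, reads=0):
    points = 10 * originals           # rule 1: original/translated articles
    points += 2 * reprints            # rule 2: reprinted articles
    points += comments_received       # rule 3: 1 point per comment received
    points += comments_made           # rule 4: 1 point per comment made
    points += min(reads // 100, 100)  # rule 5: 1 point per 100 reads, capped at 100
    return points
```

For example, two original articles with 250 total reads would yield 10 × 2 + 2 = 22 points under these rules.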
Method:
Write as many blog posts as possible
Rationale: each original or translated article published earns 10 points. If you can write one or two articles every day, you can gain more than 400 points in a month (quite a lot).
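A quick back-of-the-envelope check of that claim (the 1.5 articles/day figure is our own reading of "one or two articles every day"):

```python
points_per_article = 10   # rule 1: points per original/translated article
articles_per_day = 1.5    # assumed average for "one or two articles every day"
days = 30

monthly_points = points_per_article * articles_per_day * days
print(monthly_points)  # 450.0 — indeed more than 400 points a month
```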
Other benefits of blogging
Suppose each article gets 30 visits per day and you publish two new articles every day. Over a month the articles accumulate 2 + (2 × 2) + (2 × 3) + … + (2 × 30) article-days of exposure, each worth 30 visits. As a program:
day_write = 2
day_read = 30
num_csdn = 0
for i in range(1, 31):
    # after i days there are day_write * i articles online
    num_csdn += day_write * i
num_csdn = num_csdn * day_read
print("Total visits: {}".format(num_csdn))
Run it and you get 27,900 visits in a month. Just writing regularly can boost your numbers quite a bit.
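The loop above has a simple closed form: the cumulative article count after n days is day_write × n(n+1)/2, and each article-day is worth day_read visits. A one-line check:

```python
# Closed-form check of the visit-counting loop above:
# total = day_read * day_write * (1 + 2 + ... + n) = day_read * day_write * n(n+1)/2
day_write, day_read, n = 2, 30, 30
total = day_read * day_write * n * (n + 1) // 2
print(total)  # 27900, matching the loop's result
```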
Brushing visits
This is not recommended — it can easily get your account banned. Treat it purely as a learning exercise.
import requests
import re
import time

payload = ""

# Request headers
headers = {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3",
    "Cookie": "l=AurqcPuigwQdnQv7WvAfCoR1OlrRQW7h; isg=BHp6mNB79CHqYXpVEiRteXyyyKNcg8YEwjgLqoRvCI3ddxqxbLtOFUBGwwOrZ3ad; thw=cn; cna=VsJQERAypn0CATrXFEIahcz8; t=0eed37629fe7ef5ec0b8ecb6cd3a3577; tracknick=tb830309_22; _cc_=UtASsssmfA%3D%3D; tg=0; ubn=p; ucn=unzbyun; x=e%3D1%26p%3D*%26s%3D0%26c%3D0%26f%3D0%26g%3D0%26t%3D0%26__ll%3D-1%26_ato%3D0; miid=981798063989731689; hng=CN%7Czh-CN%7CCNY%7C156; um=0712F33290AB8A6D01951C8161A2DF2CDC7C5278664EE3E02F8F6195B27229B88A7470FD7B89F7FACD43AD3E795C914CC2A8BEB1FA88729A3A74257D8EE4FBBC; enc=1UeyOeN0l7Fkx0yPu7l6BuiPkT%2BdSxE0EqUM26jcSMdi1LtYaZbjQCMj5dKU3P0qfGwJn8QqYXc6oJugH%2FhFRA%3D%3D; ali_ab=58.215.20.66.1516409089271.6; mt=ci%3D-1_1; cookie2=104f8fc9c13eb24c296768a50cabdd6e; _tb_token_=ee7e1e1e7dbe7; v=0",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"
}

# Get the list of article URLs from the blog home page
def getUrls(url):
    # Send the request
    resp = requests.request("GET", url, data=payload, headers=headers)
    # Set the decoding mode; it is used when resp.text is read below
    resp.encoding = resp.apparent_encoding
    html_source = resp.text
    # Regular expression to pull URL links out of the page
    # (some injection-point scanners work the same way)
    urls = re.findall(r'https://[^>";\']*\d', html_source)
    new_urls = []
    for url in urls:
        # keep only article pages ("details" in the URL), de-duplicated
        if 'details' in url:
            if url not in new_urls:
                new_urls.append(url)
    return new_urls

count = 0  # renamed from "sum" so the built-in is not shadowed
urls = getUrls("https://blog.csdn.net/Terry_20100630")
while True:
    for url in urls:
        count += 1
        requests.request("GET", url, data=payload, headers=headers)
        print(url, count)
        time.sleep(5)
    time.sleep(5)
You can copy and paste this directly, but don't change the data arbitrarily. In my own testing it did not work in PyCharm and only ran in the Turtle editor.
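To see what the URL-extraction step in getUrls actually does without hitting the network, here is the same regex and filtering applied to a made-up HTML snippet (the links below are invented for illustration):

```python
import re

# Made-up HTML standing in for the blog home page, to illustrate the
# extraction step used in the script above: find https links ending in
# a digit, keep only article ("details") pages, de-duplicated.
html_source = '''
<a href="https://blog.csdn.net/Terry_20100630/article/details/100000001">post</a>
<a href="https://blog.csdn.net/Terry_20100630/article/details/100000001">same post</a>
<a href="https://blog.csdn.net/Terry_20100630">home page, filtered out</a>
'''

urls = re.findall(r'https://[^>";\']*\d', html_source)
new_urls = []
for url in urls:
    if 'details' in url and url not in new_urls:
        new_urls.append(url)

print(new_urls)  # one unique "details" URL; the home link is dropped
```

Note that the character class `[^>";']` stops the match at the closing quote of the href attribute, and the trailing `\d` anchors the match at the article ID.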