Collection of C++ algorithm learning templates -- Number Theory

Posted by jimmyhumbled on Thu, 30 Dec 2021 07:07:04 +0100

A journey of ten thousand miles begins with a single step. This blog collects some number theory templates I learned over the summer, so that they are easy to look up and reuse later. My level is limited and omissions are inevitable, so corrections are very welcome. It would be a great honor if this article helps you. Thanks to lyh and lyk for their inspiration.

Huh? When will I get around to writing up the graph-theory algorithms SPFA, Dijkstra and Kruskal? TAT

Contents

1. Fast power

1. Recursive implementation of fast power

2. Implementation with bit operations

2. Matrix elimination

1. Gaussian Elimination

2. Gauss-Jordan elimination

3. Matrix multiplication and fast power

1. Fast power

Suppose we need to compute the n-th power of x, where n is large. How do we implement this in code?

int ans=1;//x and n are assumed to be given

for(int i=1;i<=n;i++)
{
    ans*=x;//multiply by x one step at a time
}//cyh's approach

There is nothing wrong with this approach, but its time complexity is O(n), so it may be far too slow when n is large.

pow(x,n);//zmh: hahaha, I can just call a library function

Note: pow returns a double, which introduces floating-point error once the result gets large!
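As a small, hypothetical demonstration (the values are chosen here only for illustration): 3^35 is about 5*10^16, which no longer fits in the 53-bit mantissa of a double, so the value that comes back from pow, cast to an integer, may differ from the exact answer on typical platforms:

#include <cstdio>
#include <cmath>

int main()
{
    long long exact=1;
    for(int i=0;i<35;i++)
    {
        exact*=3;//exact 3^35, still well within long long range
    }
    long long approx=(long long)pow(3.0,35);//goes through double on the way back
    printf("exact  = %lld\napprox = %lld\n",exact,approx);
    return 0;
}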

Here we introduce two fast-power implementations: recursion and bit operations. Example problem: Luogu P1226 [template] Fast power | modulo operation.

1. Recursive implementation of fast power

    

typedef long long ll;
ll qpow(ll x, ll n)
{
    if(n==0)
    {
        return 1;
    }
    else if(n%2==1)
    {
        return qpow(x,n-1)*x%mod;//mod is a global modulus
    }//Odd exponent: peel off one factor of x
    else
    {
        ll ans=qpow(x,n/2)%mod;
        return ans*ans%mod;
    }//Even exponent: halve n and square the result
}

The recursion halves the exponent at each step, so only O(log n) multiplications are needed.

2. Implementation with bit operations

Although the recursion is very clear, it uses extra stack space and is generally a little slower than a non-recursive version, so the bit-operation version below is recommended. Write n in binary; the loop then repeatedly shifts n right one bit while squaring x, multiplying x into the answer whenever the current lowest bit is 1.
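For instance, take a hypothetical exponent n = 13. Since $13 = (1101)_2 = 8 + 4 + 1$,

$$x^{13} = x^{8} \cdot x^{4} \cdot x^{1},$$

so the loop squares x once per bit of n and multiplies it into the answer only for the bits that are 1, giving O(log n) multiplications in total.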

typedef long long ll;

ll qpow(ll x,ll n)
{
    ll ans=1;
    x%=mod;//mod is the modulus, as before
    while(n)
    {
        if(n&1)//check the lowest bit: is n odd?
        {
            ans=ans*x%mod;
        }
        x=x*x%mod;
        n>>=1;
    }
    return ans;
}
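For completeness, here is a minimal, self-contained sketch of how this template might be wired up. The plain "x n mod" input format and the variable names are assumptions made here for illustration, not the exact I/O format of Luogu P1226:

#include <iostream>
typedef long long ll;

ll mod;//modulus, read from input in this sketch

ll qpow(ll x,ll n)
{
    ll ans=1;
    x%=mod;
    while(n)
    {
        if(n&1)
        {
            ans=ans*x%mod;
        }
        x=x*x%mod;
        n>>=1;
    }
    return ans;
}

int main()
{
    ll x,n;
    std::cin>>x>>n>>mod;//base, exponent, modulus
    std::cout<<qpow(x,n)%mod<<std::endl;//the extra %mod handles mod==1 (answer 0)
    return 0;
}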

2. Matrix elimination

We often need matrix elimination when solving a system of linear equations or inverting a matrix. Solving a linear system by elimination only ever operates on the coefficients and the constant terms; the unknowns themselves never take part in the computation. So the elimination can be carried out directly as matrix transformations: apply the three elementary row operations to the augmented matrix of the system and reduce it to row echelon form (or reduced row echelon form).
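As a tiny illustration (an example made up here, not taken from the problems below), the system

$$\begin{cases} x + 2y = 5 \\ 3x + 4y = 11 \end{cases}$$

has the augmented matrix

$$\left(\begin{array}{cc|c} 1 & 2 & 5 \\ 3 & 4 & 11 \end{array}\right) \xrightarrow{R_2 \leftarrow R_2 - 3R_1} \left(\begin{array}{cc|c} 1 & 2 & 5 \\ 0 & -2 & -4 \end{array}\right),$$

from which back substitution gives y = 2 and then x = 1.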

1. Gaussian Elimination

I trust you already know the concrete elimination steps, so I won't repeat them here. Example problem: Luogu P3389 [template] Gaussian elimination. Here is the code:

#include <bits/stdc++.h>
using namespace std;

double a[105][105],ans[105];
const double eps=1e-7;

int main()
{
    ios::sync_with_stdio(false);
    cin.tie(0),cout.tie(0);
    int n;
    cin>>n;
    for(int i=1;i<=n;i++)
    {
        for(int j=1;j<=n+1;j++)
        {
            cin>>a[i][j];
        }
    }//Input matrix
    for(int i=1;i<=n;i++)
    {
        int temp=i;
        for(int j=i+1;j<=n;j++)
        {
            if(fabs(a[temp][i])<fabs(a[j][i]))
            {
                temp=j;
            }
        }
        if(fabs(a[temp][i])<eps)
        {
            cout<<"No Solution";
            return 0;
        }//If the largest available pivot in this column is 0, there is no unique solution
        if(i!=temp)
        {
            swap(a[i],a[temp]);
        }//Swap with the row that has the largest pivot
        double mul=a[i][i];
        for(int j=i;j<=n+1;j++)
        {
            a[i][j]/=mul;
        }
        for(int j=i+1;j<=n;j++)
        {
            mul=a[j][i];
            for(int k=i;k<=n+1;k++)
            {
                a[j][k]-=a[i][k]*mul;
            }
        }
    }//Elimination process
    ans[n]=a[n][n+1];
    for(int i=n-1;i>=1;i--)
    {
        ans[i]=a[i][n+1];
        for(int j=i+1;j<=n;j++)
        {
            ans[i]-=(a[i][j]*ans[j]);
        }
    }//Back substitution
    for(int i=1;i<=n;i++)
    {
        cout<<setiosflags(ios::fixed)<<setprecision(2)<<ans[i]<<endl;//output
    }
}

The time complexity of Gaussian elimination is O(n^3).

2. Gauss-Jordan elimination

Gauss-Jordan elimination is similar to Gaussian elimination, except that it reduces the matrix all the way to reduced row echelon form, so no separate back substitution is needed.

Example problem: Luogu P4783 [template] Matrix inversion. Here is the code:

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

ll n,a[405][810];
const ll mod=1e9+7;
bool sign=1;

ll qpow(ll x,ll n)
{
    ll ans=1;
    x%=mod;
    while(n)
    {
        if(n&1)
        {
            ans=ans*x%mod;
        }
        x=x*x%mod;
        n>>=1;
    }
    return ans;
}

void gj()
{
	for(int i=1;i<=n;++i)
    {
		int temp=i;
		for(int j=i+1;j<=n;j++)
        {
            if(a[j][i]>a[temp][i])
            {
                temp=j;
            }
        }
		if(temp!=i)
        {
             swap(a[i],a[temp]);
        }
		if(!a[i][i])
        {
            sign=0;//zero pivot: the matrix is not invertible
            return ;
        }
		ll m=qpow(a[i][i],mod-2);//modular inverse of the pivot (Fermat's little theorem)
		for(int k=1;k<=n;k++)
        {
			if(k==i)
			{
			    continue;
			}
			ll p=a[k][i]*m%mod;
			for(int j=i;j<=(n<<1);j++)
            {
                a[k][j]=((a[k][j]-p*a[i][j])%mod+mod)%mod;
            }
		}

		for(int j=1;j<=(n<<1);j++)
        {
             a[i][j]=(a[i][j]*m%mod);
        }
	}
}//Similar to Gaussian elimination

int main()
{
    ios::sync_with_stdio(false);
    cin.tie(0),cout.tie(0);
    cin>>n;
    for(int i=1;i<=n;i++)
    {
        for(int j=1;j<=n;j++)
        {
            cin>>a[i][j];
        }
        a[i][i+n]=1;//append the identity matrix on the right
    }
    gj();
    if(sign)
    {
        for(int i=1;i<=n;i++)
        {
            for(int j=n+1;j<=(n<<1);j++)
            {
                cout<<a[i][j];
                if(j<(n<<1))
                {
                    cout<<' ';
                }
            }
            if(i<n)
            {
                cout<<endl;
            }
        }
    }//Inverse matrix output
    else
    {
        cout<<"No Solution";
    }
    return 0;
}
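One detail worth spelling out: all arithmetic is done modulo the prime 10^9+7, so the code cannot divide by the pivot directly. Instead, qpow(a[i][i], mod-2) computes the modular inverse of the pivot via Fermat's little theorem:

$$a^{p-2} \equiv a^{-1} \pmod{p} \qquad (p \text{ prime},\ p \nmid a).$$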

Gauss-Jordan elimination here looks much like Gaussian elimination; the one extra thing to note is the identity matrix appended on the right, which is transformed into the inverse.
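Schematically, the augmented matrix evolves as

$$\bigl(\,A \;\big|\; I\,\bigr) \;\longrightarrow\; \bigl(\,I \;\big|\; A^{-1}\,\bigr),$$

so once gj() finishes, columns n+1 through 2n of a hold the inverse, which is exactly what the output loop prints.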

3. Matrix multiplication and fast power

Having looked at matrix elimination, let's think about how to implement multiplication between matrices. What, matrices have fast powers too? Example problem: Luogu P3390 [template] Matrix fast power. Here is the code:
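Recall the definition: if A is an n×m matrix and B is an m×p matrix, their product C = AB is n×p with

$$C_{ij} = \sum_{k=1}^{m} A_{ik} B_{kj},$$

which is exactly the triple loop in the mul function below.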

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll mod=1e9+7;
ll n,k;

struct Matrix
{
    int n,m;//Number of matrix rows and columns
    ll matrix[105][105];//"Content" of matrix
    Matrix(int x,int y)
    {
        n=x,m=y;
        memset(matrix,0,sizeof(matrix));
    }//Constructor: set the dimensions and zero out the contents
};//Store a matrix in a struct

Matrix mul(Matrix a,Matrix b)
{
    Matrix ans(a.n,b.m);
    for(int i=1;i<=ans.n;i++)
    {
        for(int j=1;j<=ans.m;j++)
        {
            for(int k=1;k<=a.m;k++)
            {
                ans.matrix[i][j]+=a.matrix[i][k]*b.matrix[k][j]%mod;
                ans.matrix[i][j]%=mod;
            }
        }
    }
    return ans;
}//Implementation of matrix multiplication

Matrix qpow(Matrix m,ll x)
{
    Matrix ans(n,n);
    memset(ans.matrix,0,sizeof(ans.matrix));
    for(int i=1;i<=n;i++)
    {
        ans.matrix[i][i]=1;
    }
    while(x)
    {
        if(x&1)
        {
            ans=mul(ans,m);
        }
        m=mul(m,m);
        x>>=1;
    }
    return ans;
}//Matrix fast power

int main()
{
    cin>>n>>k;
    Matrix M(n,n);//the constructor already sets the dimensions
    for(int i=1;i<=n;i++)
    {
        for(int j=1;j<=n;j++)
        {
            cin>>M.matrix[i][j];
        }
    }
    M=qpow(M,k);//Matrix fast power
    for(int i=1;i<=n;i++)
    {
        for(int j=1;j<=n;j++)
        {
            cout<<M.matrix[i][j];
            if(j<n)
            {
                cout<<' ';
            }
        }
        if(i<n)
        {
            cout<<endl;
        }
    }//Output of answer matrix
    return 0;
}

I won't go over matrix multiplication itself again; the code above should make the implementation clear. As for the matrix fast power, compare it with the scalar fast power from the first section: the structure is identical.
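Concretely, exactly as in the scalar case,

$$A^{13} = A^{8} \cdot A^{4} \cdot A,$$

and because matrix multiplication is associative, the same square-and-multiply loop works; the only change is that the answer starts from the identity matrix I instead of 1, which is what the initialization loop in qpow sets up.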

"No matter how high the mountain is, you can always climb to the top; no matter how long the road is, you will reach it." It's a long way to go. I'll look up and down and encourage each other!  

Topics: Algorithm