Scenario: an httpbin service has been registered directly in the mesh through a ServiceEntry, but its host name cannot be resolved. The actual error is below; let's troubleshoot it.
```shell
/ # curl httpbin.remote:30655/ip
curl: (6) Could not resolve host: httpbin.remote
```
First, check the DNS configuration:
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation, optional
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
```
I checked and found that DNS proxying is already configured, so this step is ruled out.
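As a quick sanity check, the setting can be confirmed both in the mesh config and inside a running sidecar. A minimal sketch (`sleep` is an illustrative client deployment name, not one from this cluster):

```shell
# Confirm the DNS options made it into the mesh config
kubectl -n istio-system get cm istio -o yaml | grep -B1 -A2 ISTIO_META_DNS

# Confirm a running sidecar picked them up as environment variables
# ("sleep" is an illustrative deployment name)
kubectl exec deploy/sleep -c istio-proxy -- env | grep ISTIO_META_DNS
```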
Next, check whether the DNS service itself is healthy. A brief explanation: newer versions of Istio use the smart DNS proxy in the sidecar, while versions before 1.8 use istiocoredns. I am using istiocoredns here, so I can check the coredns status directly.
```shell
# kubectl get pod -n istio-system -l app=istiocoredns
NAME                            READY   STATUS    RESTARTS   AGE
istiocoredns-84867bdf54-v8vbb   2/2     Running   0          51d
```
The DNS pod is running, so let's check its logs to see whether the DNS service is working correctly.
```shell
# kubectl logs -f istiocoredns-84867bdf54-v8vbb -n istio-system -c istio-coredns-plugin --tail 100
unknown field "workloadSelector" in v1alpha3.ServiceEntry map[addresses:[240.0.0.2] hosts:[vm-t.vm.svc.cluster.local] location:MESH_INTERNAL ports:[map[name:http-http-8000 number:8000 protocol:http]] resolution:STATIC workloadSelector:map[labels:map[service.istio.io/canonical-revision:v1 version_mesh:v1 app_mesh:vm-t bocloud.com.cn/vm:true service.istio.io/canonical-name:vm-t]]]
unknown field "workloadSelector" in v1alpha3.ServiceEntry map[addresses:[240.0.0.2] hosts:[vm-t.vm.svc.cluster.local] location:MESH_INTERNAL ports:[map[name:http-http-8000 number:8000 protocol:http]] resolution:STATIC workloadSelector:map[labels:map[service.istio.io/canonical-name:vm-t service.istio.io/canonical-revision:v1 version_mesh:v1 app_mesh:vm-t bocloud.com.cn/vm:true]]]
unknown field "targetPort" in v1alpha3.Port map[location:MESH_INTERNAL ports:[map[targetPort:15443 name:http-9080 number:9080 protocol:HTTP]] resolution:DNS hosts:[productpage.pyqns.svc.cluster.local]]
2022-03-08T05:55:02.752127Z	error	Failed to convert service-entry object, ignoring: vm/vm-test YAML decoding error: addresses:
2022-03-08T06:44:22.762466Z	info	Have 7 service entries
2022-03-08T06:44:22.762609Z	info	adding DNS mapping: productpage.bookinfo-test.global.->[240.0.0.8]
2022-03-08T06:44:22.762618Z	info	adding DNS mapping: reviews.bookinfo-test.global.->[240.0.0.9]
2022-03-08T06:44:22.762624Z	info	adding DNS mapping: http.app.->[240.0.0.100]
2022-03-08T06:44:22.762631Z	info	adding DNS mapping: details.bookinfo-test.global.->[240.0.0.6]
2022-03-08T06:44:22.762637Z	info	adding DNS mapping: ratings.bookinfo-test.global.->[240.0.0.7]
2022-03-08T06:44:27.761222Z	info	Reading service entries at 2022-03-08 06:44:27.761157157 +0000 UTC m=+4319335.422405709
```
Three problems are obvious from the log:

1. The broken ServiceEntries use the `workloadSelector` and `targetPort` fields, which the current Istio version does not support.
2. Some ServiceEntries have a malformed `addresses` field, which makes the YAML decoding fail.
3. "Have 7 service entries" shows that we have 7 ServiceEntries, but only 5 of them were assigned IP addresses; in other words, the hosts of the remaining ServiceEntries cannot be resolved.
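The gap between the reported entry count and the mappings actually added can be checked mechanically. A minimal sketch; the log excerpt below is abbreviated from the plugin output above:

```shell
# Cross-check the plugin's reported entry count against the DNS mappings
# it actually added. The excerpt is abbreviated from the log output above.
cat <<'EOF' > /tmp/coredns-plugin.log
info Have 7 service entries
info adding DNS mapping: productpage.bookinfo-test.global.->[240.0.0.8]
info adding DNS mapping: reviews.bookinfo-test.global.->[240.0.0.9]
info adding DNS mapping: http.app.->[240.0.0.100]
info adding DNS mapping: details.bookinfo-test.global.->[240.0.0.6]
info adding DNS mapping: ratings.bookinfo-test.global.->[240.0.0.7]
EOF

# Count the mappings that were actually installed
grep -c 'adding DNS mapping' /tmp/coredns-plugin.log
# prints 5 -- two of the 7 ServiceEntries never received a DNS mapping
```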
The debugging so far makes the situation quite clear. Let's solve the problems one by one.
- Problem 1: simply delete the ServiceEntries that use `workloadSelector` and `targetPort`; they were created while redeploying with a newer version of Istio and were never cleaned up afterwards.
- Problem 2: check whether the `addresses` value conforms to the spec, and reassign the IP address if it does not.
- Problem 3, the most important one right now: no IP was allocated for the host. Because the IstioOperator is configured with the ISTIO_META_DNS_AUTO_ALLOCATE parameter, which is supposed to allocate IPs automatically, I did not explicitly configure the `addresses` field in the ServiceEntry, and that is what caused this problem. Now add the address and query the log again:
```yaml
spec:
  addresses:
  - 240.0.0.88
  endpoints:
  - address: 10.20.21.20
    ports:
      http-10: 30655
  hosts:
  - httpbin.remote
  location: MESH_INTERNAL
  ports:
  - name: http-10
    number: 8100
    protocol: http
  resolution: DNS
```
Query the log:
```shell
2022-03-08T07:05:32.749801Z	info	Reading service entries at 2022-03-08 07:05:32.749760496 +0000 UTC m=+4320600.411009043
2022-03-08T07:05:32.752280Z	info	Have 7 service entries
2022-03-08T07:05:32.752383Z	info	adding DNS mapping: details.bookinfo-test.global.->[240.0.0.6]
2022-03-08T07:05:32.752408Z	info	adding DNS mapping: ratings.bookinfo-test.global.->[240.0.0.7]
2022-03-08T07:05:32.752413Z	info	adding DNS mapping: http.app.->[240.0.0.100]
2022-03-08T07:05:32.752418Z	info	adding DNS mapping: httpbin.remote.->[240.0.0.88]
2022-03-08T07:05:32.752423Z	info	adding DNS mapping: productpage.bookinfo-test.global.->[240.0.0.8]
2022-03-08T07:05:32.752428Z	info	adding DNS mapping: reviews.bookinfo-test.global.->[240.0.0.9]
2022-03-08T07:05:37.749806Z	info	Reading service entries at 2022-03-08 07:05:37.749761643 +0000 UTC m=+4320605.411010374
```
Sure enough, the log shows that the mapping httpbin.remote. -> 240.0.0.88 has been added to DNS.
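With the mapping in place, the originally failing request can be re-tested from the client pod. A sketch of the verification:

```shell
# From the client container: name resolution should now return the allocated VIP
nslookup httpbin.remote

# The original request should no longer fail with "Could not resolve host"
curl httpbin.remote:30655/ip
```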
As for why the ISTIO_META_DNS_AUTO_ALLOCATE parameter did not take effect, that still needs to be investigated; it may only take effect in version 1.8.4.