Installing and configuring MySQL

Installation

Installing MySQL on Linux

MySQL operations

Creating the database and table

create database c2c_groupon;

CREATE TABLE collections (
    id bigint unsigned default 0 not null comment 'primary key',
    created_at timestamp default CURRENT_TIMESTAMP not null comment 'creation time',
    updated_at timestamp default CURRENT_TIMESTAMP not null on update CURRENT_TIMESTAMP comment 'update time',
    seller_id bigint unsigned default 0 not null comment 'seller id',
    cover varchar(255) default '' not null comment 'listing cover image',
    description varchar(255) default '' not null comment 'listing description',
    status enum ('DEFAULT', 'PROCESSING', 'ENDING', 'CHECKING', 'END', 'OFF_SALE', 'SELLER_CONFIRMING') default 'DEFAULT' not null comment 'listing status (processing = in progress, ending = closing, checking = refund in progress, end = finished, off_sale = delisted)',
    end_time timestamp default '1970-01-01 08:00:01' not null comment 'end time',
    is_deleted tinyint default 0 not null comment 'soft-delete flag',
    priority int default 0 not null comment 'distribution/sort order',
    title varchar(50) default '' not null comment 'listing title',
    express json not null comment 'shipping fee settings',
    type_id bigint unsigned default 0 not null comment 'category',
    tag_ids json not null comment 'tagIds',
    mega_spu_ids json not null comment 'megaSPuIds',
    is_only_text tinyint default 0 not null comment 'text-only flag; if false, images are required',
    confirm_time timestamp null comment 'must be confirmed before this time',
    images json null comment 'image descriptions',
    pay_method enum ('ONLINE', 'OFFLINE') default 'ONLINE' not null comment 'payment method',
    meta json not null comment 'extra info'
) comment 'listing info table' charset = utf8mb4;

Note that MySQL 8.0 no longer accepts '0000-00-00 00:00:00' as a TIMESTAMP default; the valid range starts at 1970-01-01 00:00:01 UTC, so in the +8 time zone the earliest usable value is 1970-01-01 08:00:01.
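A quick sanity check of that boundary (not part of the original run; assumes a local mysql client, and CONVERT_TZ with numeric offsets needs no time zone tables):

# 1970-01-01 08:00:01 in the +08:00 zone is exactly the TIMESTAMP lower bound in UTC
mysql -u root -p -e "SELECT CONVERT_TZ('1970-01-01 08:00:01', '+08:00', '+00:00') AS utc_equivalent;"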

Importing data

Importing data from CSV (failed)

load data infile '/var/lib/mysql-files/collections.csv'
into table collections
FIELDS TERMINATED BY ','  -- field separator
LINES TERMINATED BY '\n'  -- line separator
IGNORE 1 ROWS
-- @confirm_time must also be bound in the column list, e.g. (..., @confirm_time, ...)
set confirm_time = nullif(@confirm_time, '');  -- empty fields become NULL; a user variable (@) is required here

ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option so it cannot execute this statement

Two ways to handle this:

  1. Run SHOW VARIABLES LIKE "secure_file_priv" to see which directory is allowed by the configuration, and put the file there (see the example below).
  2. Start mysqld without this option.
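For example (the directory shown is the one already used in the LOAD DATA statement above, the usual Debian/Ubuntu default):

# find the allowed directory (empty value = no restriction, NULL = file import/export disabled)
mysql -u root -p -e 'SHOW VARIABLES LIKE "secure_file_priv";'
# copy the CSV into that directory
cp collections.csv /var/lib/mysql-files/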

ERROR 3140 (22032): Invalid JSON text: "The document root must not be followed by other values." at position 3 in value for column 'collections.express'.

Probably because the JSON values themselves contain the field separator ',', so the fields get split in the wrong places.
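A possible workaround (not tested in these notes): quote every field when exporting the CSV and declare the quoting, so commas inside the JSON columns are no longer treated as separators:

mysql -u root -p -D c2c_groupon -e "
load data infile '/var/lib/mysql-files/collections.csv'
into table collections
fields terminated by ',' optionally enclosed by '\"'  -- quoted fields keep embedded commas intact
lines terminated by '\n'
ignore 1 rows;"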

Importing with a SQL file

source /home/abc/abc.sql

Enabling binlog

vim /etc/mysql/my.cnf

[mysqld]
log-bin = log-name   # enable binlog
binlog-format = ROW
server-id = 1        # required for MySQL replication; must not clash with canal's slaveId
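The change only takes effect after restarting MySQL (service name assumed; it may be mysqld on some distributions):

sudo systemctl restart mysql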
# check whether binlog is enabled
show variables like "log_bin";

Viewing binlog

show binlog events;                   # view only the first binlog file
show binlog events in 'mysql-bin';    # view a specific binlog file
show binary logs;                     # list binlog files
show master status;                   # show the binlog file currently being written to

MySQL's built-in tool: mysqlbinlog

mysqlbinlog -vv /var/lib/mysql/binlog.000053
# -vv reconstructs ROW-format events into readable pseudo-SQL

Common binlog commands

Creating a user for canal

CREATE USER canal IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
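A quick check that the account and its privileges are in place (not part of the original notes):

mysql -u root -p -e "SHOW GRANTS FOR 'canal'@'%';"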

Installing dependencies

Installing Java 11

apt install openjdk-11-jdk -y

es

Installation

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

sudo apt-get install apt-transport-https

echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

sudo apt-get update && sudo apt-get install elasticsearch

Starting

systemctl start elasticsearch.service


It cannot be started as the root user, so this approach was set aside for now.

Starting ES with Docker

docker network create es
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.13.1
# start

docker run -d --name es01 --net es -p 9200:9200 -it -m 1GB docker.elastic.co/elasticsearch/elasticsearch:8.13.1
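To confirm the container is up and to re-read the startup output (which contains the generated elastic password and the enrollment tokens shown below):

docker ps --filter name=es01
docker logs es01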

Error: failed to connect to MySQL.

MySQL 8.0.3+ uses caching_sha2_password as the default authentication plugin; switch the canal user to mysql_native_password:

select host, user, plugin from mysql.user;
ALTER USER 'canal'@'%' IDENTIFIED WITH mysql_native_password BY 'canal';
FLUSH PRIVILEGES;
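To verify that the canal account can now authenticate and has the replication privileges it needs (a check added here, not in the original notes):

mysql -h 127.0.0.1 -u canal -pcanal -e "SHOW MASTER STATUS;"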

The first run prints some security information:

✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  HC=_XCpqYK4HxxUZvk2A

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  a73741fab03388d99178294397198eb0e767ec9f3d7d9fa73c44131b1bac4d78

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjEzLjEiLCJhZHIiOlsiMTcyLjE4LjAuMjo5MjAwIl0sImZnciI6ImE3Mzc0MWZhYjAzMzg4ZDk5MTc4Mjk0Mzk3MTk4ZWIwZTc2N2VjOWYzZDdkOWZhNzNjNDQxMzFiMWJhYzRkNzgiLCJrZXkiOiJ4d0V2bzQ0QkVPZE9jU2VDemZIMTpCME1XTE9mX1RmaTgtOUx0ZDNLNWtBIn0=

ℹ️ Configure other nodes to join this cluster:
• Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjEzLjEiLCJhZHIiOlsiMTcyLjE4LjAuMjo5MjAwIl0sImZnciI6ImE3Mzc0MWZhYjAzMzg4ZDk5MTc4Mjk0Mzk3MTk4ZWIwZTc2N2VjOWYzZDdkOWZhNzNjNDQxMzFiMWJhYzRkNzgiLCJrZXkiOiJ5QUV2bzQ0QkVPZE9jU2VDemZIMTpMNURLbURMVFNhaTZVbFpuSF9kUlRBIn0=

  If you're running in Docker, copy the enrollment token and run:
  `docker run -e "ENROLLMENT_TOKEN=<token>" docker.elastic.co/elasticsearch/elasticsearch:8.13.1`

Resetting the password

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

Accessing ES

export ELASTIC_PASSWORD="your_password"
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .

curl --cacert http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200

External access must use HTTPS: https://localhost:9200
The username is elastic and the password is the one shown above.

Expected response:

{
  "name" : "8557bacd9a05",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "-6dsI6fgTYmiuPKWbaCA_A",
  "version" : {
    "number" : "8.13.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "9287f29bba5e270bd51d557b8daccb7d118ba247",
    "build_date" : "2024-03-29T10:05:29.787251984Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}



kibana

  1. Pull the Kibana Docker image.

docker pull docker.elastic.co/kibana/kibana:8.13.1

  2. Optional: Verify the Kibana image's signature.

wget https://artifacts.elastic.co/cosign.pub
cosign verify --key cosign.pub docker.elastic.co/kibana/kibana:8.13.1

  3. Start a Kibana container.

docker run -d --name kib01 --net es -p 5601:5601 docker.elastic.co/kibana/kibana:8.13.1

  4. When Kibana starts, it outputs a unique generated link to the terminal. To access Kibana, open this link in a web browser (see the log command after this list if it scrolled away).

  5. In your browser, enter the enrollment token that was generated when you started Elasticsearch. To regenerate the token, run:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana

  6. Log in to Kibana as the elastic user with the password that was generated when you started Elasticsearch. To regenerate the password, run:

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
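If the generated link from step 4 scrolled away, it can be re-read from the container logs (the exact log wording may differ between versions; a plain docker logs kib01 also works):

docker logs kib01 2>&1 | grep -i "go to"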

canal

Installing the deployer and adapter

wget https://github.com/alibaba/canal/releases/download/canal-1.1.6/canal.deployer-1.1.6.tar.gz

mkdir /home/canal 
tar zxvf canal.deployer-1.1.6.tar.gz  -C /home/canal

wget https://github.com/alibaba/canal/releases/download/canal-1.1.6/canal.adapter-1.1.6.tar.gz

mkdir /home/canal-adapter
tar zxvf canal.adapter-1.1.6.tar.gz -C /home/canal-adapter


Configuration

deployer

vim conf/example/instance.properties


#--------------------------------
## mysql serverId
canal.instance.mysql.slaveId = 7777
# position info; change to your own database
canal.instance.master.address = 127.0.0.1:3306
canal.instance.master.journal.name =
canal.instance.master.position =
canal.instance.master.timestamp =
#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
# username/password; change to your own credentials
canal.instance.dbUsername = canal
canal.instance.dbPassword = canal
canal.instance.defaultDatabaseName = c2c_groupon
canal.instance.connectionCharset = UTF-8
# table regex
canal.instance.filter.regex = .*\\..*
#-------------------------------------
#-------------------------------------

adapter

vim conf/application.yml

canal.conf:
  srcDataSources:
    defaultDS:
      url: jdbc:mysql://127.0.0.1:3306/c2cgroupon?useUnicode=true
      username: canal
      password: canal
  canalAdapters:
  - instance: example
    groups:
    - groupId: g1
      outerAdapters:
      - name: es7
        hosts: http://127.0.0.1:9200 # 127.0.0.1:9200 for rest mode
        properties:
          mode: rest # or transport
          # security.auth: test:123456 # only used for rest mode
          cluster.name: elasticsearch

Adding a table mapping config

vim conf/es7/collections.yml

dataSourceKey: defaultDS # key of the source datasource; matches srcDataSources above
destination: example     # canal instance name (or MQ topic)
groupId: g1              # groupId in MQ mode; only data for this groupId is synced
esMapping:
  _index: index-c2cgroupon # ES index name
  _id: _id                 # ES _id; if omitted, the pk item below must be set and ES assigns the _id itself
  sql: "SELECT id AS _id, id, description, is_deleted, mega_spu_ids, priority, seller_id, tag_ids, title, type_id, status, updated_at FROM collection"        # SQL mapping
  commitBatch: 1           # commit batch size
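The ES adapter is generally expected to write into an existing index, so index-c2cgroupon should be created first. A minimal sketch, reusing the curl access pattern from above; the field types are assumptions derived from the table definition and only cover some of the mapped columns:

curl --cacert http_ca.crt -u elastic:$ELASTIC_PASSWORD -X PUT "https://localhost:9200/index-c2cgroupon" \
  -H 'Content-Type: application/json' \
  -d '{
    "mappings": {
      "properties": {
        "title":       { "type": "text" },
        "description": { "type": "text" },
        "seller_id":   { "type": "long" },
        "status":      { "type": "keyword" },
        "priority":    { "type": "integer" },
        "updated_at":  { "type": "date" }
      }
    }
  }'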

Starting and stopping

/home/canal/bin/startup.sh
/home/canal/bin/stop.sh

/home/canal-adapter/bin/startup.sh
/home/canal-adapter/bin/stop.sh

Viewing logs

# server
cat /home/canal/logs/canal/canal.log
# instance
cat /home/canal/logs/example/example.log

# server
cat /home/canal-adapter/logs/adapter/adapter.log

!! ==Error==

2024-04-09 10:27:48.978 [pool-2-thread-1] ERROR c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - java.lang.NullPointerException
2024-04-09 10:27:48.979 [Thread-3] ERROR c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - Outer adapter sync failed!

conf/es7/collections.yml was missing the outerAdapterKey setting (vim conf/es7/collections.yml):

dataSourceKey: defaultDS        # key of the source datasource; matches srcDataSources above
outerAdapterKey: es7            # !!! must match the key of the es adapter in application.yml
destination: example            # canal instance name (or MQ topic)

Full synchronization

curl -X POST http://127.0.0.1:8081/etl/es7/collection.yml

Error: TaskNotFound

Fix: add a key to the es7 adapter entry in conf/application.yml:

      outerAdapters:
      # - name: logger
      - name: es7
        key: es7   # !!! must match outerAdapterKey in the table mapping config
        hosts: https://127.0.0.1:9200 # 127.0.0.1:9200 for rest mode
        properties:
          mode: rest # or transport
          security.auth: elastic:v1dlevjTMEJITQf2VpOr # only used for rest mode
          cluster.name: docker-cluster

Error: unable to find valid certification path to requested target

  • Possibly because the CA certificate is not trusted by the JVM.

# attempt: register the certificate generated by ES into the JRE's trust store
keytool -exportcert -alias [host]-1 -cacerts -storepass password -file [host].cer

But the certificate password is unknown, and doing this properly would require generating our own CA certificate, so this approach was set aside for now.
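For reference, importing the http_ca.crt copied out of the container earlier into the JDK's default truststore would look roughly like this (a sketch only; it assumes the default cacerts password changeit and that the adapter's JVM is restarted afterwards):

keytool -importcert -alias es-http-ca -cacerts -storepass changeit -noprompt -file http_ca.crt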

Disabling ES security mode

# create directories on the host to bind-mount into the container
mkdir es7
mkdir es7/config
mkdir es7/data
mkdir es7/plugins
mkdir es7/logs

echo "# 集群名称  
cluster.name: docker-cluster  
# 节点名称  
node.name: node-1  
# 数据和日志的存储目录  
path.data: /usr/share/elasticsearch/data  
path.logs: /usr/share/elasticsearch/logs  
# 设置绑定的ip,设置为 0.0.0.0 表示监听所有网卡  
network.host: 0.0.0.0  
# 端口  
http.port: 9200  
# 设置在集群为单结点模式  
discovery.type: single-node  
#indices.fielddata.cache.size: 50%
xpack.security.enabled: false
xpack.security.enrollment.enabled: true" > es7/config/elasticsearch.yml

docker run -d --name es01 --net es -p 9200:9200 -p 9300:9300 -it -m 1GB -v /home/es7/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/es7/data:/usr/share/elasticsearch/data -v /home/es7/logs:/usr/share/elasticsearch/logs -v /home/es7/plugins:/usr/share/elasticsearch/plugins -e "TAKE_FILE_OWNERSHIP=true" --privileged=true docker.elastic.co/elasticsearch/elasticsearch:8.13.1
  • TAKE_FILE_OWNERSHIP=true: after bind-mounting, the container may not have permission to access the host directories; setting this environment variable fixes that.

Error: /usr/share/elasticsearch/config/elasticsearch.yml: Device or resource busy

Fixed by explicitly adding the xpack.security.enabled setting to the mounted config (Elasticsearch's security auto-configuration otherwise tries to rewrite elasticsearch.yml, which fails for a bind-mounted file).

Then another error: Error opening log file 'logs/gc.log': Permission denied

It worked after fixing the permissions:

chmod 777 -R /home/es7/logs
chmod 777 -R /home/es7/data
chmod 777 -R /home/es7/plugins

Mounting Kibana config files

mkdir -p /home/kibana/config
mkdir -p /home/kibana/data

chmod 777 -R /home/kibana/config /home/kibana/data

echo '# Default Kibana configuration for docker target  
server.host: "0.0.0.0"  
server.shutdownTimeout: "5s"  
elasticsearch.hosts: [ "http://elasticsearch:9200" ]  
monitoring.ui.container.elasticsearch.enabled: true' > /home/kibana/config/kibana.yml

echo '## Node command line options
## See `node --help` and `node --v8-options` for available options
## Please note you should specify one option per line

## max size of old space in megabytes
#--max-old-space-size=4096

## do not terminate process on unhandled promise rejection
 --unhandled-rejections=warn

## restore < Node 16 default DNS lookup behavior
--dns-result-order=ipv4first

## enable OpenSSL 3 legacy provider
# --openssl-legacy-provider' > /home/kibana/config/node.options
  
docker run --name kib01 --net es -p 5601:5601 -v /home/kibana/config:/usr/share/kibana/config -v /home/kibana/data:/usr/share/kibana/data --privileged=true -d docker.elastic.co/kibana/kibana:8.13.1
  • To disable security mode, the Node.js options file needs to be modified: [Use Kibana in a production environment | Kibana Guide [8.13] | Elastic](https://www.elastic.co/guide/en/kibana/8.13/production.html#openssl-legacy-provider)

Actually there is no need to disable it; entering the token has nothing to do with this. Use the manual configuration option and connect with the username and password instead.

Error: connection to ES timed out

Not resolved; setting it aside for now.

Syncing with canal again

cannot write xcontent for unknown value of type class java.sql.Timestamp

Cause: the field names did not match.

java.lang.NumberFormatException: For input string: "2024-04-11 10:28:38.0"

Fix: use the unix_timestamp() function in the mapping SQL to convert the datetime into a Unix timestamp (e.g. unix_timestamp(updated_at) AS updated_at).
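After a full re-sync, the document count in ES should match the table. A quick check (security was disabled above, hence plain HTTP; adjust scheme and credentials if your setup differs):

# row count in MySQL vs. document count in ES
mysql -u root -p -e "SELECT COUNT(*) FROM c2c_groupon.collections;"
curl "http://127.0.0.1:9200/index-c2cgroupon/_count?pretty"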

It finally works!

(Screenshot: Pasted image 20240411103611.png)