Deployed and started via Docker, but the browser gets no response

Startup command

docker run --name maxkb --restart=always --privileged=true -p 8080:8080 -v /usr/docker/maxkb/data:/var/lib/postgresql/data -v /usr/docker/maxkb/python-packages:/opt/maxkb/app/sandbox/python-packages -d cr2.fit2cloud.com/1panel/maxkb:latest

Startup log

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Asia/Shanghai
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok


Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2025-02-08 14:33:32.825 CST [50] LOG:  starting PostgreSQL 15.8 (Debian 15.8-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2025-02-08 14:33:32.839 CST [50] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-02-08 14:33:32.844 CST [53] LOG:  database system was shut down at 2025-02-08 14:33:31 CST
2025-02-08 14:33:32.853 CST [50] LOG:  database system is ready to accept connections
 done
server started
CREATE DATABASE


/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init.sql
CREATE DATABASE
You are now connected to database "maxkb" as user "root".
CREATE EXTENSION


waiting for server to shut down....2025-02-08 14:33:33.408 CST [50] LOG:  received fast shutdown request
2025-02-08 14:33:33.423 CST [50] LOG:  aborting any active transactions
2025-02-08 14:33:33.432 CST [50] LOG:  background worker "logical replication launcher" (PID 56) exited with exit code 1
2025-02-08 14:33:33.432 CST [51] LOG:  shutting down
2025-02-08 14:33:33.433 CST [51] LOG:  checkpoint starting: shutdown immediate
2025-02-08 14:33:33.998 CST [51] LOG:  checkpoint complete: wrote 1854 buffers (11.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.385 s, sync=0.178 s, total=0.566 s; sync files=597, longest=0.156 s, average=0.001 s; distance=8797 kB, estimate=8797 kB
2025-02-08 14:33:34.013 CST [50] LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

2025-02-08 14:33:34.163 CST [8] LOG:  starting PostgreSQL 15.8 (Debian 15.8-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2025-02-08 14:33:34.165 CST [8] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2025-02-08 14:33:34.165 CST [8] LOG:  listening on IPv6 address "::", port 5432
2025-02-08 14:33:34.172 CST [8] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-02-08 14:33:34.177 CST [69] LOG:  database system was shut down at 2025-02-08 14:33:33 CST
2025-02-08 14:33:34.553 CST [8] LOG:  database system is ready to accept connections
127.0.0.1:5432 - accepting connections
Building prefix dict from the default dictionary ...
DEBUG:jieba:Building prefix dict from the default dictionary ...
Dumping model to file cache /tmp/jieba.cache
DEBUG:jieba:Dumping model to file cache /tmp/jieba.cache
Loading model cost 1.007 seconds.
DEBUG:jieba:Loading model cost 1.007 seconds.
Prefix dict has been built successfully.
DEBUG:jieba:Prefix dict has been built successfully.
Operations to perform:
  Apply all migrations: application, contenttypes, dataset, django_apscheduler, django_celery_beat, embedding, function_lib, setting, users
Running migrations:
<<... initialization script output omitted ...>>
  Applying users.0001_initial... OK
  Applying setting.0001_initial... OK
  Applying setting.0002_systemsetting... OK
  Applying setting.0003_model_meta_model_status... OK
  Applying setting.0004_alter_model_credential... OK
  Applying setting.0005_model_permission_type... OK
  Applying setting.0006_alter_model_status... OK
  Applying dataset.0001_initial... OK
  Applying application.0001_initial... OK
  Applying application.0002_chat_client_id... OK
  Applying application.0003_application_icon... OK
  Applying application.0004_applicationaccesstoken_show_source... OK
  Applying application.0005_alter_chat_abstract_alter_chatrecord_answer_text... OK
  Applying application.0006_applicationapikey_allow_cross_domain_and_more... OK
  Applying application.0007_alter_application_prologue... OK
  Applying application.0008_chat_is_deleted... OK
  Applying application.0009_application_type_application_work_flow_and_more... OK
  Applying application.0010_alter_chatrecord_details... OK
  Applying application.0011_application_model_params_setting... OK
  Applying application.0012_application_stt_model_application_stt_model_enable_and_more... OK
  Applying application.0013_application_tts_type... OK
  Applying application.0014_application_problem_optimization_prompt... OK
  Applying application.0015_re_database_index... OK
  Applying application.0016_alter_chatrecord_problem_text... OK
  Applying application.0017_application_tts_model_params_setting... OK
  Applying application.0018_workflowversion_name... OK
  Applying application.0020_application_record_update_time... OK
  Applying application.0021_applicationpublicaccessclient_client_id_and_more... OK
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying dataset.0002_image... OK
  Applying dataset.0003_document_hit_handling_method... OK
  Applying dataset.0004_document_directly_return_similarity... OK
  Applying dataset.0005_file... OK
  Applying dataset.0006_dataset_embedding_mode... OK
  Applying dataset.0007_alter_paragraph_content... OK
  Applying dataset.0008_alter_document_status_alter_paragraph_status... OK
  Applying dataset.0009_alter_document_status_alter_paragraph_status... OK
  Applying dataset.0010_file_meta... OK
  Applying dataset.0011_document_status_meta_paragraph_status_meta_and_more... OK
  Applying django_apscheduler.0001_initial... OK
  Applying django_apscheduler.0002_auto_20180412_0758... OK
  Applying django_apscheduler.0003_auto_20200716_1632... OK
  Applying django_apscheduler.0004_auto_20200717_1043... OK
  Applying django_apscheduler.0005_migrate_name_to_id... OK
  Applying django_apscheduler.0006_remove_djangojob_name... OK
  Applying django_apscheduler.0007_auto_20200717_1404... OK
  Applying django_apscheduler.0008_remove_djangojobexecution_started... OK
  Applying django_apscheduler.0009_djangojobexecution_unique_job_executions... OK
  Applying django_celery_beat.0001_initial... OK
  Applying django_celery_beat.0002_auto_20161118_0346... OK
  Applying django_celery_beat.0003_auto_20161209_0049... OK
  Applying django_celery_beat.0004_auto_20170221_0000... OK
  Applying django_celery_beat.0005_add_solarschedule_events_choices... OK
  Applying django_celery_beat.0006_auto_20180322_0932... OK
  Applying django_celery_beat.0007_auto_20180521_0826... OK
  Applying django_celery_beat.0008_auto_20180914_1922... OK
  Applying django_celery_beat.0006_auto_20180210_1226... OK
  Applying django_celery_beat.0006_periodictask_priority... OK
  Applying django_celery_beat.0009_periodictask_headers... OK
  Applying django_celery_beat.0010_auto_20190429_0326... OK
  Applying django_celery_beat.0011_auto_20190508_0153... OK
  Applying django_celery_beat.0012_periodictask_expire_seconds... OK
  Applying django_celery_beat.0013_auto_20200609_0727... OK
  Applying django_celery_beat.0014_remove_clockedschedule_enabled... OK
  Applying django_celery_beat.0015_edit_solarschedule_events_choices... OK
  Applying django_celery_beat.0016_alter_crontabschedule_timezone... OK
  Applying django_celery_beat.0017_alter_crontabschedule_month_of_year... OK
  Applying django_celery_beat.0018_improve_crontab_helptext... OK
  Applying django_celery_beat.0019_alter_periodictasks_options... OK
  Applying embedding.0001_initial... OK
  Applying embedding.0002_embedding_search_vector... OK
  Applying embedding.0003_alter_embedding_unique_together... OK
  Applying users.0002_user_create_time_user_update_time... OK
  Applying users.0003_user_source... OK
  Applying users.0004_alter_user_email... OK
  Applying function_lib.0001_initial... OK
  Applying function_lib.0002_functionlib_is_active_functionlib_permission_type... OK
  Applying setting.0007_model_model_params_form... OK
  Applying setting.0008_modelparam... OK
  Applying setting.0009_set_default_model_params_form... OK

- Start Celery as Distributed Task Queue: Celery

- Start Gunicorn Local Model WSGI HTTP Server

- Start Gunicorn WSGI HTTP Server
2025-02-08 14:34:04 Check service status: celery_default -> running at 83
2025-02-08 14:34:05 Check service status: local_model -> running at 84
2025-02-08 14:34:06 Check service status: gunicorn -> running at 85
...(the same three status checks repeat roughly every 30 seconds)...
2025-02-08 14:37:55 Check service status: celery_default -> running at 83
2025-02-08 14:37:56 Check service status: local_model -> running at 84
2025-02-08 14:37:57 Check service status: gunicorn -> running at 85

Accessing http://ip:8080 in a browser gets no response at all. Curling 8080 from the server itself, and even from inside the container, also gets no response. Did I get one of the steps wrong?

The server's firewall is already turned off.
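If anyone hits the same symptom, a few checks can narrow down whether it is the port mapping or the application itself. This is a generic sketch, not MaxKB-specific; the container name `maxkb` and port `8080` are taken from the run command above:

```shell
# Is the container running, and is 8080 actually mapped to the host?
docker ps --filter name=maxkb --format '{{.Names}}  {{.Status}}  {{.Ports}}'

# Is anything on the host listening on 8080?
ss -tlnp | grep ':8080'

# Does the app answer from inside the container? -m 5 stops curl from hanging forever.
docker exec maxkb curl -s -m 5 -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080

# Even with firewalld off, Docker's own iptables chains can filter mapped ports:
iptables -S DOCKER-USER 2>/dev/null
```

If the in-container curl also hangs, the problem is the application process rather than networking, and `docker logs -f maxkb` may show where startup stalls.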

Was this deployed on Linux?

I hit something similar on an openEuler v22.03 (LTS-SP3) server: after the install finished, the automatically created dataease and mysql 8.4.0 Docker containers had not started. It turned out the installer pulls the images from the internet by default, so I manually loaded both images from the offline installer package. After that only the dataease container would start; mysql would not, and the web page at localhost:8100 was still unreachable.
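For a container that loads but won't start, its own log usually names the cause. A hedged sketch (the `mysql` name filter is a guess; `docker ps -a` shows the real container name):

```shell
# List all containers, including exited ones; note the mysql container's exit status
docker ps -a

# Show the last lines of its log; "$(...)" picks up the container id by name filter
docker logs --tail 50 "$(docker ps -a -q --filter name=mysql)"
```

The log line just before the container exits usually states the reason explicitly (for example, permissions on the mounted data directory).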
Installer output:
[root@lowair-app64 installer-v2.10.5]# /bin/bash install.sh
Current time: Mon Feb 10 11:30:07 AM CST 2025

  1. Check the installation environment and initialize environment variables
    Fresh installation
  2. Set the runtime directory
    Runtime directory: /usr/local/dataease/dataease2.0
    Config file directory: /usr/local/dataease/dataease2.0/conf
  3. Initialize the runtime directory
    Copy installation files to the runtime directory
    Adjust configuration file parameters
  4. Install the dectl command-line tool
    Installed to /usr/local/bin/dectl & /usr/bin/dectl
  5. Adjust operating-system settings
    Disable SELINUX
    Open firewall port 8100
    success
    success
  6. Install docker
    Docker already installed; skipping this step
    Start Docker
  7. Install docker-compose
    Offline install of docker-compose
    docker-compose installed successfully
  8. Load the DataEase images
    Load image dataease_v2.10.5
    Load image mysql_8.4.0
  9. Configure the DataEase service
    Configure the dataease Service
    Configure start on boot
  10. Start the DataEase service
    Job for dataease.service failed because the control process exited with error code.
    See "systemctl status dataease.service" and "journalctl -xeu dataease.service" for details.
    ======================= Installation complete =======================
    Question: do the last two lines mean that the dataease.service job failed because the control process exited with an error code, and that I should check "systemctl status dataease.service" and "journalctl -xeu dataease.service" for details?
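Yes, and the two commands the error points to are the right next step: the generic systemd message hides the real cause, which is usually in the unit's journal. Something like:

```shell
# Summary of the last start attempt (exit code, recent log lines)
systemctl status dataease.service --no-pager

# Full unit log; -x adds explanatory text, -e jumps to the end, -u filters by unit
journalctl -xeu dataease.service --no-pager | tail -n 50
```

Look for the first error line in the journal output; with a mysql container that won't start, the service failure is often just a downstream symptom of that.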

DataEase? This is the MaxKB discussion forum; the DataEase forum is below :rofl:

Thanks, I assumed I could ask about either here.

Yes, CentOS 7.9, on a brand-new server.

What are your server specs?

1 vCPU, 8 GB RAM, 100 GB disk

The CPU needs at least 4 cores for it to run smoothly~
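A quick way to confirm what the host actually has, using standard Linux tools (nothing MaxKB-specific):

```shell
nproc                # CPU core count; the reply above suggests at least 4
free -h | head -n 2  # total and used memory
df -h /              # disk space on the root filesystem
```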

This is exactly the same problem as mine. How did you solve it? I've checked my configuration and it meets the requirements.