Celery with Redis - Unix socket timeout

vd2z7a6w · asked 7 months ago · in Redis
Follow (0) | Answers (3) | Views (127)

I have an application that uses Celery for its background tasks, with Redis as both the broker and the result backend, and I have configured Redis to use a Unix socket.

brok = 'redis+socket://:ABc@/tmp/redis.sock'
app = Celery('NTWBT', backend=brok, broker=brok)
app.conf.update(
    BROKER_URL=brok,
    BROKER_TRANSPORT_OPTIONS={
        "visibility_timeout": 3600
    },
    CELERY_RESULT_BACKEND=brok,
    CELERY_ACCEPT_CONTENT=['pickle', 'json', 'msgpack', 'yaml'],
)

But every time I add a task, Celery gives me this error:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 283, in trace_task
    uuid, retval, SUCCESS, request=task_request,
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 257, in store_result
    request=request, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 491, in _store_result
    self.set(self.get_key_for_task(task_id), self.encode(meta))
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 160, in set
    return self.ensure(self._set, (key, value), **retry_policy)
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 149, in ensure
    **retry_policy
  File "/usr/local/lib/python2.7/dist-packages/kombu/utils/__init__.py", line 246, in retry_over_time
    return fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 169, in _set
    pipe.execute()
  File "/usr/local/lib/python2.7/dist-packages/redis/client.py", line 2620, in execute
    self.shard_hint)
  File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 897, in get_connection
    connection = self.make_connection()
  File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 906, in make_connection
    return self.connection_class(**self.connection_kwargs)
TypeError: __init__() got an unexpected keyword argument 'socket_connect_timeout'


Which option should I set so that Celery does not apply a timeout to its Redis connections?

uurv41yg1#

In my case, the problem was that my computer's IP was blocked from a port on the server. After allowing TCP connections through that port from my local machine, Celery could connect to the backend again.
Beyond that, some of the Celery settings below may help with handling timeouts (you can read more about them in the Celery documentation).

# celery broker connection timeouts and retries
broker_connection_retry = True  # Retries connecting to the broker
broker_connection_retry_on_startup = True  # Important as the worker is restarted after every task
broker_connection_max_retries = 10  # Maximum number of retries to establish a connection to the broker
broker_connection_timeout = 30  # Default timeout in s before timing out the connection to the AMQP server, default 4.0
broker_pool_limit = None  # connection pool is disabled and connections will be established / closed for every use

BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 36000,  # Increase time before tasks time out
                            'max_retries': 10,  # The max number of retries passed to kombu package
                            "interval_start": 0,  # Time in seconds when a retry is started
                            "interval_step": 60,  # The number of seconds it waits more with every retry
                            "interval_max": 600,  # The maximum number of seconds it waits for a retry
                            "retry_policy": {'timeout': 60.0},  # Increase timeout for connections to backend
                            }

result_backend_transport_options = {'visibility_timeout': 36000,  # Increase time before tasks time out
                                    'max_retries': 10,  # The max number of retries passed to kombu package
                                    "interval_start": 0,  # Time in seconds when a retry is started
                                    "interval_step": 60,  # The number of seconds it waits more with every retry
                                    "interval_max": 600,  # The maximum number of seconds it waits for a retry
                                    "retry_policy": {'timeout': 60.0},  # Increase timeout for connections to backend
                                    }

# Redis connection settings
redis_socket_timeout = 300
redis_socket_connect_timeout = 300  # Timeout for redis socket connections
redis_socket_keepalive = True 
redis_retry_on_timeout = True  # Not recommended for unix sockets
task_reject_on_worker_lost = True  # Retry the task if the worker is killed

# Handling of timeouts 
result_persistent = True  # Store results so they don't get lost, when the broker is restarted
worker_deduplicate_successful_tasks = True
worker_cancel_long_running_tasks_on_connection_loss = False
worker_proc_alive_timeout = 300  # The timeout in seconds (int/float) when waiting for a new worker process to start up.
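To see what the `interval_start` / `interval_step` / `interval_max` transport options above actually do, here is a minimal sketch (not kombu's actual implementation) of how the resulting retry schedule grows:

```python
# Minimal sketch (not kombu's actual code) of the retry schedule implied by
# interval_start / interval_step / interval_max in the transport options.
def retry_intervals(interval_start, interval_step, interval_max, max_retries):
    """Return the wait (in seconds) before each retry attempt."""
    intervals = []
    wait = interval_start
    for _ in range(max_retries):
        intervals.append(min(wait, interval_max))
        wait += interval_step
    return intervals

# With the values used above: waits grow by 60 s per attempt, capped at 600 s.
print(retry_intervals(0, 60, 600, 10))
# [0, 60, 120, 180, 240, 300, 360, 420, 480, 540]
```

So with `max_retries = 10` the worker keeps retrying the connection for roughly 45 minutes in total before giving up.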


wz3gfoph2#

It seems this problem is related to the version of the Redis client installed on your system: `socket_connect_timeout` was first introduced in redis-py 2.10.0, so old client versions reject it.
You therefore need to update your Redis installation.
If you are running on an Ubuntu server, you can install from the official apt repository:

$ sudo apt-get install -y python-software-properties
$ sudo add-apt-repository -y ppa:rwky/redis
$ sudo apt-get update
$ sudo apt-get install -y redis-server
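Since `socket_connect_timeout` is a parameter of the redis-py client, it is also worth verifying the installed client version. A minimal sketch of the check (`parse_version` is a hypothetical helper; with a real install you would feed it `redis.__version__` after `import redis`):

```python
# Hedged sketch: check whether a redis-py version string is new enough to
# accept socket_connect_timeout (introduced in redis-py 2.10.0).
# parse_version is a hypothetical helper, not part of redis-py itself.
MIN_REDIS_PY = (2, 10, 0)

def parse_version(version_string):
    """Turn a dotted version like '2.10.6' into a comparable tuple."""
    parts = []
    for piece in version_string.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def supports_socket_connect_timeout(version_string):
    return parse_version(version_string) >= MIN_REDIS_PY

print(supports_socket_connect_timeout("2.9.1"))   # False
print(supports_socket_connect_timeout("2.10.6"))  # True
```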

And update to the latest version of Celery.
Here is the GitHub issue in Celery, since you are not the only one hitting this problem: https://github.com/celery/celery/issues/2903
If none of that works for you, I suggest using RabbitMQ instead of Redis:

$ sudo apt-get install rabbitmq-server
$ sudo pip install librabbitmq


Then configure Celery in your app with this CELERY_BROKER_URL:

'amqp://guest:guest@localhost:5672//'
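One detail worth noting: the trailing `//` in that URL stands for the default vhost `/`, which must be percent-encoded if written out explicitly. A small Python 3 sketch (`amqp_url` is a hypothetical helper, not part of Celery) of building such a URL safely:

```python
from urllib.parse import quote

# Hypothetical helper (not a Celery API): build an AMQP broker URL with the
# credentials and vhost percent-encoded. The trailing '//' in the URL above
# is shorthand for the default vhost '/', which encodes as %2F.
def amqp_url(user="guest", password="guest", host="localhost",
             port=5672, vhost="/"):
    return "amqp://{}:{}@{}:{}/{}".format(
        quote(user, safe=""), quote(password, safe=""),
        host, port, quote(vhost, safe=""))

print(amqp_url())
# amqp://guest:guest@localhost:5672/%2F
```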


I hope this answer covers all your needs. Cheers.

zour9fqk3#

There are bugs in several libraries that cause this exception in Celery:

If you use Redis over a Unix socket as the broker, there is no easy fix yet, short of monkey-patching the celery, kombu and/or redis-py libraries.
For now, I recommend either using Redis over a TCP connection or switching to another broker, such as RabbitMQ.
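If you do switch from the Unix socket to TCP, only the scheme and address part of the broker URL change. A sketch of the rewrite (`socket_url_to_tcp` is a hypothetical helper, not a Celery/kombu API; host, port and db defaults are assumptions to adjust to your redis.conf):

```python
# Hypothetical helper (not a Celery/kombu API): rewrite a redis+socket://
# broker URL into a plain redis:// TCP URL, keeping the password part.
# The host/port/db defaults are assumptions; adjust to your redis.conf.
def socket_url_to_tcp(socket_url, host="127.0.0.1", port=6379, db=0):
    auth = socket_url.split("//", 1)[1].split("@", 1)[0]  # e.g. ':ABc'
    return "redis://{}@{}:{}/{}".format(auth, host, port, db)

print(socket_url_to_tcp('redis+socket://:ABc@/tmp/redis.sock'))
# redis://:ABc@127.0.0.1:6379/0
```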
