Hangfire - Recurring job can't be scheduled, see inner exception for details

Question (0 votes, 4 answers)

I have an application that runs on three different servers, with a load balancer distributing users. The application uses its own queue, and I added a job filter so that jobs keep their original queue in case they fail at some point. But then again, it still behaves as if the application is not running. The error is below:

System.InvalidOperationException: Recurring job can't be scheduled, see inner exception for details.
 ---> Hangfire.Common.JobLoadException: Could not load the job. See inner exception for the details.
 ---> System.IO.FileNotFoundException: Could not resolve assembly 'My.Api'.
   at System.TypeNameParser.ResolveAssembly(String asmName, Func`2 assemblyResolver, Boolean throwOnError, StackCrawlMark& stackMark)
   at System.TypeNameParser.ConstructType(Func`2 assemblyResolver, Func`4 typeResolver, Boolean throwOnError, Boolean ignoreCase, StackCrawlMark& stackMark)
   at System.TypeNameParser.GetType(String typeName, Func`2 assemblyResolver, Func`4 typeResolver, Boolean throwOnError, Boolean ignoreCase, StackCrawlMark& stackMark)
   at System.Type.GetType(String typeName, Func`2 assemblyResolver, Func`4 typeResolver, Boolean throwOnError)
   at Hangfire.Common.TypeHelper.DefaultTypeResolver(String typeName)
   at Hangfire.Storage.InvocationData.DeserializeJob()
   --- End of inner exception stack trace ---
   at Hangfire.Storage.InvocationData.DeserializeJob()
   at Hangfire.RecurringJobEntity..ctor(String recurringJobId, IDictionary`2 recurringJob, ITimeZoneResolver timeZoneResolver, DateTime now)
   --- End of inner exception stack trace ---
   at Hangfire.Server.RecurringJobScheduler.ScheduleRecurringJob(BackgroundProcessContext context, IStorageConnection connection, String recurringJobId, RecurringJobEntity recurringJob, DateTime now)
What can be the issue here? The apps are running. And once I trigger the recurring jobs, they are good to go, until they show the above error.

Here is my AppStart file:

private IEnumerable<IDisposable> GetHangfireServers()
{
    Hangfire.GlobalConfiguration.Configuration
        .SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
        .UseSimpleAssemblyNameTypeSerializer()
        .UseRecommendedSerializerSettings()
        .UseSqlServerStorage(HangfireServer, new SqlServerStorageOptions
        {
            CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
            SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
            QueuePollInterval = TimeSpan.Zero,
            UseRecommendedIsolationLevel = true,
            DisableGlobalLocks = true
        });

    yield return new BackgroundJobServer(new BackgroundJobServerOptions {
        Queues = new[] { "myapp" + GetEnvironmentName() },
        ServerName = "MyApp" + ConfigurationHelper.GetAppSetting("Environment")
    });
}

public void Configuration(IAppBuilder app)
{
    var container = new Container();
    container.Options.DefaultScopedLifestyle = new AsyncScopedLifestyle();
    
    RegisterTaskDependencies(container);
    container.RegisterWebApiControllers(System.Web.Http.GlobalConfiguration.Configuration);
    container.Verify();
    
    var configuration = new HttpConfiguration();
    configuration.DependencyResolver = new SimpleInjectorWebApiDependencyResolver(container);
    
    /* HANGFIRE CONFIGURATION */
    if (Environment == "Production")
    {
        GlobalJobFilters.Filters.Add(new PreserveOriginalQueueAttribute());
        Hangfire.GlobalConfiguration.Configuration.UseActivator(new SimpleInjectorJobActivator(container));
        Hangfire.GlobalConfiguration.Configuration.UseLogProvider(new Api.HangfireArea.Helpers.CustomLogProvider(container.GetInstance<Core.Modules.LogModule>()));
        app.UseHangfireAspNet(GetHangfireServers);
        app.UseHangfireDashboard("/hangfire", new DashboardOptions
        {
            Authorization = new[] { new DashboardAuthorization() },
            AppPath = GetBackToSiteURL(),
            DisplayStorageConnectionString = false
        });
        AddOrUpdateJobs();
    }
    /* HANGFIRE CONFIGURATION */
    
    app.UseWebApi(configuration);
    
    WebApiConfig.Register(configuration);

}

public static void AddOrUpdateJobs()
{
    var queueName = "myapp" + GetEnvironmentName();
    RecurringJob.AddOrUpdate<HangfireArea.BackgroundJobs.AttachmentCreator>(
        "MyApp_MyTask",
        (service) => service.RunMyTask(),
        "* * * * *",
        queue: queueName,
        timeZone: TimeZoneInfo.FindSystemTimeZoneById("Turkey Standard Time"));
}

What could be the issue here?

hangfire
4 Answers

5 votes

It turns out that Hangfire by itself does not work well when multiple applications use the same SQL schema. To solve this, I used Hangfire.MAMQSqlExtension. It is a third-party extension, but the repository states that it is officially recognized by Hangfire. If you use the same schema for multiple applications, you have to use this extension in all of them; otherwise you will run into the error above.
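
A minimal sketch of how the extension can be wired up, assuming its setup call is the UseMAMQSqlServerStorage extension method described in the project's README (check the repository for the exact signature; the queue list below is a placeholder based on the question's code):

// Assumption: Hangfire.MAMQSqlExtension exposes UseMAMQSqlServerStorage, taking the connection
// string, the usual SqlServerStorageOptions, and the queues this application will process.
// Every application that shares the schema must register its own queue list the same way.
Hangfire.GlobalConfiguration.Configuration
    .SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
    .UseSimpleAssemblyNameTypeSerializer()
    .UseRecommendedSerializerSettings()
    .UseMAMQSqlServerStorage(
        HangfireServer,                                // same connection string as before
        new SqlServerStorageOptions
        {
            CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
            SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
            QueuePollInterval = TimeSpan.Zero,
            UseRecommendedIsolationLevel = true,
            DisableGlobalLocks = true
        },
        new[] { "myapp" + GetEnvironmentName() });     // only fetch jobs from this app's queue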

Also, if different versions of your application run at the same time (for example production, test, and development), the extension by itself cannot fully handle failed jobs. When a job fails, regular Hangfire does not respect its original queue and moves it to the default queue. This eventually causes problems if your application only processes its own queue, or if the default queue is shared. To solve this and force Hangfire to respect the original queue, I used this solution. It works well, and you can name each application's queue based on its web.config / appsettings.json.
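
Such a queue-preserving filter typically records the job's original queue as a job parameter and restores it whenever the job is re-enqueued. A sketch of that approach, matching the PreserveOriginalQueueAttribute registered in the question's Configuration method (treat the details as illustrative, not necessarily the exact code the link pointed to):

using Hangfire.Common;
using Hangfire.States;
using Hangfire.Storage;

public class PreserveOriginalQueueAttribute : JobFilterAttribute, IApplyStateFilter
{
    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // Only react when the job is being (re-)enqueued.
        var enqueuedState = context.NewState as EnqueuedState;
        if (enqueuedState == null) return;

        // Read the queue recorded on the first enqueue, if any.
        var originalQueue = JobHelper.FromJson<string>(
            context.Connection.GetJobParameter(context.BackgroundJob.Id, "OriginalQueue"));

        if (originalQueue != null)
        {
            // A retry would otherwise go to "default"; send it back to its original queue.
            enqueuedState.Queue = originalQueue;
        }
        else
        {
            // First enqueue: remember the queue as a job parameter.
            context.Connection.SetJobParameter(
                context.BackgroundJob.Id, "OriginalQueue", JobHelper.ToJson(enqueuedState.Queue));
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
    }
}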

Somehow my previous answer got deleted? This is how the problem was solved, and there was no other way. Please do not delete answers for people who run into this issue.


0 votes

Another option I found is to use Hangfire's background processes: https://www.hangfire.io/overview.html#background-process

public class CleanTempDirectoryProcess : IBackgroundProcess
{
    public void Execute(BackgroundProcessContext context)
    {
        // The clean-up call below is the illustrative pseudocode from the Hangfire docs;
        // replace it with your actual recurring work.
        Directory.CleanUp(Directory.GetTempDirectory());

        // Wait in a cancellation-aware way until the next run.
        context.Wait(TimeSpan.FromHours(1));
    }
}

And set the delay yourself. This fixed the issue for me, since I need the job to run recurrently. I'm not sure what effect this may have on the dashboard.
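
For reference, the process still has to be handed to the server. One way to do that, as a sketch assuming Hangfire 1.6+ where BackgroundJobServer accepts additional processes (the queue name is a placeholder):

var server = new BackgroundJobServer(
    new BackgroundJobServerOptions { Queues = new[] { "myapp" } },   // placeholder queue name
    JobStorage.Current,
    new IBackgroundProcess[] { new CleanTempDirectoryProcess() });   // runs alongside the workers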


0 votes

You can create a job filter that does the same thing as the retry while also setting the queue.

The difference is that you don't wait for the job to run again; it runs immediately.

using System;
using System.Linq;
using Hangfire.Common;
using Hangfire.Logging;
using Hangfire.States;
using Hangfire.Storage;

public class AutomaticRetryQueueAttribute : JobFilterAttribute, IApplyStateFilter, IElectStateFilter
{
    private string queue;
    private int attempts;
    private readonly object _lockObject = new object();

    private readonly ILog _logger = LogProvider.For<AutomaticRetryQueueAttribute>();

    public AutomaticRetryQueueAttribute(int attempts = 10, string queue = "default")
    {
        // Hangfire queue names must be lowercase, so the default is "default".
        this.queue = queue;
        this.attempts = attempts;
    }

    public int Attempts
    {
        get { lock (_lockObject) { return attempts; } }
        set
        {
            if (value < 0)
            {
                throw new ArgumentOutOfRangeException(nameof(value), @"Attempts value must be equal or greater than zero.");
            }

            lock (_lockObject)
            {
                attempts = value;
            }
        }
    }

    public string Queue
    {
        get { lock (_lockObject) { return queue; } }
        set
        {
            lock (_lockObject)
            {
                queue = value;
            }
        }
    }

    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        var newState = context.NewState as EnqueuedState;
        if (!string.IsNullOrWhiteSpace(queue) && newState != null && newState.Queue != Queue)
        {
            newState.Queue = String.Format(Queue, context.BackgroundJob.Job.Args.ToArray());
        }

        if ((context.NewState is ScheduledState || context.NewState is EnqueuedState) &&
            context.NewState.Reason != null &&
            context.NewState.Reason.StartsWith("Retry attempt"))
        {
            transaction.AddToSet("retries", context.BackgroundJob.Id);
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        if (context.OldStateName == ScheduledState.StateName)
        {
            transaction.RemoveFromSet("retries", context.BackgroundJob.Id);
        }
    }

    public void OnStateElection(ElectStateContext context)
    {
        var failedState = context.CandidateState as FailedState;
        if (failedState == null)
        {
            // This filter accepts only failed job state.
            return;
        }

        var retryAttempt = context.GetJobParameter<int>("RetryCount") + 1;

        if (retryAttempt <= Attempts)
        {
            ScheduleAgainLater(context, retryAttempt, failedState);
        }
        else
        {
            _logger.ErrorException($"Failed to process the job '{context.BackgroundJob.Id}': an exception occurred.", failedState.Exception);
        }
    }

    private void ScheduleAgainLater(ElectStateContext context, int retryAttempt, FailedState failedState)
    {
        context.SetJobParameter("RetryCount", retryAttempt);

        const int maxMessageLength = 50;
        var exceptionMessage = failedState.Exception.Message.Length > maxMessageLength
            ? failedState.Exception.Message.Substring(0, maxMessageLength - 1) + "…"
            : failedState.Exception.Message;

        // If attempt number is less than max attempts, we should
        // schedule the job to run again later.

        var reason = $"Retry attempt {retryAttempt} of {Attempts}: {exceptionMessage}";

        context.CandidateState = (IState)new EnqueuedState { Reason = reason };

        if (context.CandidateState is EnqueuedState enqueuedState)
        {
            enqueuedState.Queue = String.Format(Queue, context.BackgroundJob.Job.Args.ToArray());
        }

        _logger.WarnException($"Failed to process the job '{context.BackgroundJob.Id}': an exception occurred. Retry attempt {retryAttempt} of {Attempts} will be performed.", failedState.Exception);
    }
}
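
You would then register the filter globally (or decorate individual job methods with the attribute); the attempt count and queue name below are only examples:

GlobalJobFilters.Filters.Add(new AutomaticRetryQueueAttribute(attempts: 5, queue: "myapp"));

Keep in mind that Hangfire registers its built-in AutomaticRetry filter globally by default, so you may want to account for both filters being active.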

-1 votes

  1. Delete the old Hangfire database and recreate it under a new name
  2. Or use the in-memory storage method (UseInMemoryStorage); see the sketch below
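
A minimal sketch of the second option, assuming the Hangfire.InMemory package (which provides the UseInMemoryStorage extension) is installed. Note that in-memory storage is per-process, so jobs are neither shared across the three servers nor persisted across restarts:

// Replaces UseSqlServerStorage(...) from the question's configuration.
GlobalConfiguration.Configuration
    .SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
    .UseSimpleAssemblyNameTypeSerializer()
    .UseRecommendedSerializerSettings()
    .UseInMemoryStorage();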