As a Lead Software Engineer in Pune, you will lead the development and enhancement of a key management system built with Java 21, following a hexagonal architecture and integrating multiple microservices. The system exposes REST interfaces, uses Kafka messaging, and performs cryptographic operations against Hardware Security Modules; its primary database is Postgres, so expertise in database management and optimization is required. You will oversee code quality, enforce development standards, and guide a team of engineers toward secure, efficient, and scalable software delivery.

Responsibilities:
- Lead, design, develop, and test web and cloud-native applications.
- Collaborate with Software Engineers and Product Managers to architect web and cloud platforms, own end-to-end architectural assessments, and define epics/features in a cross-functional agile team that delivers working software incrementally.
- Mentor team members, participate in interviews, research new technologies, and maintain code quality.
- Define and coach the team on development best practices, champion test-driven development, and enforce coding guidelines and scanning rules.
- Conduct technical reviews and promote design patterns and architectural best practices in a microservices environment.
- Provide technical leadership in designing microservices, building RESTful APIs, and managing asynchronous messaging with Kafka.
To be successful in this role, you should have:
- 10+ years of experience in microservices-based, cloud-native development and 3+ years of team leadership experience.
- Hands-on Java experience, with expertise in Spring Boot, Hibernate, and related technologies.
- Knowledge of hexagonal architecture and experience with Postgres, Kafka, Redis, and RESTful APIs.
- Proficiency in container and serverless architectures, test-driven development, DevOps, and source control management, plus strong problem-solving skills.

Stand out by showcasing hands-on experience with cryptographic operations and key management systems, familiarity with Scrum/Agile methodologies, a self-starter mindset, global communication skills, leadership qualities, eagerness to share knowledge, a continuous-learning mindset, and a desire to contribute to the growth of software development and team leadership.
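The hexagonal (ports-and-adapters) architecture the posting names keeps domain logic independent of frameworks and infrastructure: the domain defines a port, and adapters implement it. A minimal, language-agnostic sketch of the idea (the posting's actual stack is Java 21/Spring Boot; the service, port, and repository names below are illustrative, not from the posting):

```python
from dataclasses import dataclass
from typing import Protocol


class KeyRepository(Protocol):
    """Port: the domain's contract for key storage, free of any framework."""
    def save(self, key_id: str, material: bytes) -> None: ...
    def load(self, key_id: str) -> bytes: ...


@dataclass
class KeyManagementService:
    """Domain service: depends only on the port, never on Postgres or Kafka."""
    repo: KeyRepository

    def rotate(self, key_id: str, new_material: bytes) -> bytes:
        old = self.repo.load(key_id)          # fetch current key material
        self.repo.save(key_id, new_material)  # persist the replacement
        return old                            # return the superseded material


class InMemoryKeyRepository:
    """Adapter: an in-memory stand-in; a production adapter would wrap
    Postgres (or an HSM client) behind the same port."""
    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}

    def save(self, key_id: str, material: bytes) -> None:
        self._store[key_id] = material

    def load(self, key_id: str) -> bytes:
        return self._store[key_id]
```

Swapping the in-memory adapter for a Postgres-backed one changes no domain code, which is the property this architecture is after.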
- 5+ years of experience building or implementing requirements on any Zoho application; must have experience as a Team Lead.
- Designed user groups, assigned roles and access rights, and defined sharing and viewing rules.
- Designed organizational processes in the Zoho One suite using Workflow Rules, Schedules, Actions, Assignment Rules, Case Escalation Rules, Scoring Rules, Marketing Attribution, and Segmentation.
- Customized and configured Zoho One applications and developed custom functions in Deluge.
- Experienced in integrating Zoho with internal databases and other resources; experience with third-party integrations.
- Handle all basic administrative functions, including user account maintenance, reports, dashboards, and other routine tasks.
- Experienced in the design and development of Zoho Creator applications.
- Experience and proficiency with Zoho Deluge required.
- Good problem-solving skills.
Job Title: Python Developer
Location: Jaipur, India

Role Overview
Build and operate a scalable, headless browser automation service that posts owner replies to reviews across multiple publishers (e.g., Yelp, Google, TripAdvisor). The system ingests JSON jobs, logs in with provided credentials, finds the target review, posts the response, and reports outcomes, reliably and at scale.

Core Responsibilities
- Automation adapters: implement Playwright/Selenium flows per publisher (login, navigate to location_url, fuzzy-match the review, post the reply, verify).
- Job orchestration: use Celery (Redis broker) for distributed workers; Celery Beat for scheduled tasks (retries, health checks, DOM drift probes).
- Scalability & throughput: configure worker pools and autoscaling to comfortably handle 10–100+ concurrent jobs; rate-limit per publisher/domain and back off on throttling.
- Idempotency: dedupe by publisher_identifier + location_identifier + review_identifier.
- Observability: emit structured logs/metrics/traces; Datadog dashboards and alerts (success rate, latency, error taxonomy, captcha/MFA rate).
- Blockers & challenges: detect captchas/MFA/layout changes; implement human-in-the-loop resolution (no bypassing); resilient locators/waits.
- Reliability: health checks, circuit breakers, canary releases, and feature flags to disable a publisher adapter quickly.

Required Experience
- 3+ years building production browser automation (Playwright preferred) for authenticated flows.
- Strong Python (async, typing, testing) and Celery + Redis expertise (routing, acks/retries, ETA/countdown, result backends).
- AWS (ECS/EKS or EC2, S3 for artifacts, CloudWatch/ALB, IAM) and Docker.
- Datadog (metrics, logs, APM traces, monitors, SLOs).
- Practical handling of rate limits, CAPTCHA, MFA, session lifecycle, and CSRF.

Nice to Have
- Fuzzy matching/text similarity (e.g., rapidfuzz) for robust review matching.
- Terraform/IaC; blue/green or canary deploys.
- Postgres for job/state storage (unique constraints for dedupe).

Minimal Target Architecture
- API (FastAPI) receives a JSON job → validates the schema → enqueues a Celery task (Redis broker).
- Celery workers (Docker) run Playwright in headless mode, store artifacts (S3), and emit metrics/logs (Datadog).
- Celery Beat schedules DOM-probe jobs, reprocessing, and key rotation checks.
- Result sink (Postgres/Redis/S3) stores a typed outcome: status, error_code, timestamps, artifact URIs.
- Feature flags (env/DB) to enable/disable publisher adapters instantly.

Core Technical Skills

Browser Automation
- Strong hands-on with Playwright (preferred) or Selenium for headless automation.
- Experience handling logged-in user flows (authentication, sessions, CSRF tokens, cookies).
- Robust DOM locator strategies (ARIA roles, CSS/XPath, page objects).
- Ability to manage dynamic pages, lazy loads, and infinite scrolls.
- Captcha/MFA detection and human-in-the-loop integration (not bypass).

Backend & Job Orchestration
- Python (async/await, typing, testing, logging).
- Celery (task queues, retries, routing, scheduling with Celery Beat).
- Redis (as Celery broker and caching layer).
- Designing idempotent and fault-tolerant job pipelines.

Scalability & Systems Design
- Architecting distributed worker pools that scale to 10–100+ concurrent jobs.
- Rate limiting, throttling, and backoff strategies.
- Experience with message queues and asynchronous task execution.
- Familiarity with multi-tenant architectures (per-publisher adapters).

Cloud & Infrastructure
- AWS: ECS/EKS or EC2 for containerized workers; S3 for artifact storage (screenshots, logs, videos); Secrets Manager/KMS for secure credential handling; CloudWatch/ALB for monitoring/logging.
- Docker: building, running, and deploying containerized workers.
- CI/CD pipelines for automation deployments.

Observability & Monitoring
- Datadog (APM, metrics, logs, custom dashboards, alerting).
- OpenTelemetry/structured logging for traceability.
- Error taxonomy and health monitoring for automation jobs.

Security & Compliance
- Secure storage and rotation of credentials (Vault/Secrets Manager).
- Encrypted communication between services.
- Audit trails (job logs, screenshots, structured events).

Data & Matching
- Fuzzy text matching (rapidfuzz/fuzzywuzzy, token set ratio) to identify reviews reliably.
- String normalization (case, whitespace, punctuation, emoji handling).
- Postgres/MySQL basics for job state/result persistence.
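The normalization-plus-fuzzy-matching approach described above can be sketched with the standard library alone. This uses difflib.SequenceMatcher as a stdlib stand-in for the rapidfuzz token_set_ratio the posting mentions; the function names and the 0.85 threshold are illustrative assumptions, not from the posting:

```python
import re
from difflib import SequenceMatcher


def normalize(text: str) -> str:
    """Lowercase, strip punctuation and symbols (incl. emoji), and collapse
    whitespace, so cosmetic differences don't defeat review matching."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)  # drop punctuation/symbols
    return " ".join(text.split())        # collapse runs of whitespace


def review_similarity(a: str, b: str) -> float:
    """Similarity in [0.0, 1.0] computed on normalized text."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()


def find_review(target: str, candidates: list[str],
                threshold: float = 0.85) -> int:
    """Return the index of the best-matching candidate review, or -1 if no
    candidate clears the threshold (route to a human instead of guessing)."""
    if not candidates:
        return -1
    scores = [review_similarity(target, c) for c in candidates]
    best = max(range(len(candidates)), key=scores.__getitem__)
    return best if scores[best] >= threshold else -1
```

Returning -1 below the threshold rather than the closest match mirrors the posting's human-in-the-loop stance: posting a reply to the wrong review is worse than escalating.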