5.0 years
5 - 9 Lacs
Bengaluru
On-site
Rystad Energy is a leading global independent research and energy intelligence company dedicated to helping clients navigate the future of energy. By providing high-quality data and thought leadership, our international team empowers businesses, governments and organizations to make well-informed decisions. Our extensive portfolio of products and solutions covers all aspects of global energy fundamentals, spanning every corner of the oil and gas industry, renewables, clean technologies, supply chain and power markets. Headquartered in Oslo, Norway, with an expansive global network, our data, analysis, advisory and education services provide clients a competitive edge in the market. For more information, visit www.rystadenergy.com.

Role
Our DevOps Engineers play a key role in designing and implementing robust CI/CD pipelines, creating and optimizing our infrastructure, and developing the applications and services that support and enhance our platform. They also improve our Internal Developer Platform (IDP), ensuring secure, scalable, and seamless deployments to meet the demands of our global operations. If you’re passionate about driving innovation in the cloud, building resilient infrastructure, and developing powerful supporting applications, we’d love to have you on board.

Requirements

Required Skills
Proficient in English, both spoken and written
Experience using Infrastructure as Code (IaC)
Skilled in writing and operating applications and services
Excellent communication and collaboration skills
Familiar with GitOps practices and tools, such as ArgoCD or Flux
Experience in hybrid infrastructure setups (on-prem + cloud)
Able to troubleshoot and resolve issues related to infrastructure, networking, deployments, applications, and performance
Capable of working independently and taking ownership of assigned projects and tasks
A proactive, independent mindset with a passion for learning and collaboration
Knowledgeable in security best practices and able to implement them in a DevOps context

Preferred Skills
5-8 years of experience in software development, with at least 3-5 years in a DevOps role
Azure
Kubernetes
Terraform
Ansible
Helm
ArgoCD
Good scripting skills (Bash, PowerShell, or Python is a plus)
Linux
Docker
Databases (MSSQL, Redis, RabbitMQ, MongoDB, PostgreSQL, etc.)
Observability stack (Grafana, Prometheus, Loki, OpenTelemetry, etc.)
Azure DevOps, Bitbucket
Nginx

Responsibilities
Automate deployments and build robust CI/CD pipelines to support global workloads
Improve and maintain our Internal Developer Platform (IDP) to ensure security, efficiency, and scalability
Design, build, and operate applications and services that support infrastructure and cloud environments
Troubleshoot and resolve issues related to infrastructure, networking, deployments, applications, and performance
Stay up to date with the latest trends and develop expertise in emerging cloud technologies
Set up and optimize monitoring, alerting, and incident response processes
Proactively identify and resolve performance, reliability, and security issues
Collaborate with development teams to integrate SRE best practices into their workflows
Conduct post-mortems and root cause analyses on incidents

Qualifications
Education: Bachelor’s degree in Computer Science or related field (a plus)
Certified Kubernetes Administrator (CKA) or CKAD certification (a plus); Azure Solutions Architect (a plus)

Benefits
Lean, flat, non-hierarchical structure that will enable you to impact products and workflows
A diverse, inclusive, meritocratic culture
Community driven with the desire to create and share to have an impact globally
Keen to challenge your skills and deepen existing expertise while learning new ones
A global and quickly expanding international business culture with more than 80 nationalities
Inclusive and supportive working environment with a focus on a culture of sharing
Opportunity to join a globally leading energy knowledge house
Flexible work environment
Posted 3 weeks ago
5.0 years
0 Lacs
India
Remote
Job Title: Data Engineer – Observability & Insights Platform
Location: Remote
Employment Type: Contract
Experience: 5+ Years

Key Responsibilities:
1. Observability Signal Correlation: Integrate and analyze signals from logs, metrics, and traces using Grafana and Prometheus. Enrich observability data by correlating it with business context to enable meaningful insights.
2. Data Enrichment & Pipeline Development: Build and maintain data pipelines to enhance technical signals with business metadata. Leverage OpenTelemetry (OTel) for observability instrumentation across systems.
3. Machine Learning Integration: Design, build, and deploy ML models for anomaly detection, forecasting, and incident noise reduction. Continuously improve ML solutions to increase relevance and business value of incident signals.
4. Disruption Prediction & Risk Mitigation: Identify trends and patterns that can predict business disruptions and support preemptive actions.
5. Action Enablement: Make observability insights actionable for business stakeholders through accessible dashboards and tools. Support both automated and manual decision-making processes.
6. Cross-Functional Collaboration: Work closely with IT, DevOps, and Business teams to ensure alignment between technical implementations and business objectives.
7. Continuous Improvement: Monitor and optimize data pipelines for accuracy, reliability, and performance.

Required Skills & Qualifications:
Proven experience as a Data Engineer or in a similar role focused on observability and analytics
Strong proficiency in SQL and Python
Experience working on Google Cloud Platform (GCP)
Expertise in BigQuery for business intelligence and analytics
Hands-on knowledge of Grafana, Prometheus, and Splunk as monitoring/observability tools
Familiarity with OpenTelemetry (OTel) for observability instrumentation
Experience with big data technologies such as Apache Spark, Kafka, and Airflow

Machine Learning & Analytical Expertise:
Practical experience applying ML techniques to observability data for anomaly detection and forecasting
Ability to reduce noise in incident alerts and deliver more relevant and high-value insights
Strong analytical mindset to interpret complex datasets and identify actionable trends

Soft Skills:
Excellent communication and collaboration skills to work across technical and business stakeholders
Strong problem-solving abilities and a passion for using data to address real business challenges.
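The anomaly-detection and noise-reduction responsibilities above can be illustrated with a minimal rolling z-score sketch in Python. It assumes pandas/NumPy and a synthetic latency series; the window size, threshold, and data are illustrative assumptions, not details from the listing.

```python
import numpy as np
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 60, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag points whose rolling z-score exceeds a threshold (illustrative defaults)."""
    rolling_mean = series.rolling(window, min_periods=window // 2).mean()
    rolling_std = series.rolling(window, min_periods=window // 2).std()
    z_scores = (series - rolling_mean) / rolling_std.replace(0, np.nan)
    return pd.DataFrame({
        "value": series,
        "z_score": z_scores,
        "is_anomaly": z_scores.abs() > z_threshold,
    })

if __name__ == "__main__":
    # Synthetic latency series (ms) with one injected spike standing in for an incident.
    rng = np.random.default_rng(42)
    latency = pd.Series(rng.normal(120, 10, 500))
    latency.iloc[400] = 400
    flagged = flag_anomalies(latency)
    print(flagged[flagged["is_anomaly"]])
```

In practice the series would come from Prometheus or BigQuery rather than a random generator, and the flags would feed an alert-enrichment or routing step instead of a print statement.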
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
Company Description
Evallo is a leading provider of a comprehensive SaaS platform for tutors and tutoring businesses, revolutionizing education management. With features like advanced CRM, profile management, standardized test prep, automatic grading, and insightful dashboards, we empower educators to focus on teaching. We're dedicated to pushing the boundaries of ed-tech and redefining efficient education management.

Why this role matters
Evallo is scaling from a focused tutoring platform to a modular operating system for all service businesses that bill by the hour. As we add payroll, proposals, white-boarding, and AI tooling, we need a Solution Architect who can translate product vision into a robust, extensible technical blueprint. You’ll be the critical bridge between product, engineering, and customers—owning architecture decisions that keep us reliable at 5k+ concurrent users and cost-efficient at 100k+ total users.

Outcomes we expect
Map the current backend and frontend, flag structural debt, and publish an Architecture Gap Report
Define naming and layering conventions, linter/formatter rules, and a lightweight ADR process
Ship reference architecture for new modules
Lead cross-team design reviews; no major feature ships without architecture sign-off
The eventual goal is to have Evallo run in a fully observable, autoscaling environment with less than 10% infra cost waste; monitoring dashboards should trigger fewer than 5 false positives per month.

Day-to-day
Solution Design: Break down product epics into service contracts, data flows, and sequence diagrams. Choose the right patterns—monolith vs. microservice, event vs. REST, cache vs. DB index—based on cost, team maturity, and scale targets.
Platform-wide Standards: Codify review checklists (security, performance, observability) and enforce them via GitHub templates and CI gates. Champion a shift-left testing mindset; critical paths reach 80% automated coverage before QA touches them.
Scalability & Cost Optimization: Design load-testing scenarios that mimic 5k concurrent tutoring sessions; guide DevOps on autoscaling policies and CDN strategy. Audit infra spend monthly; recommend serverless, queuing, or data-tier changes to cut waste.
Release & Environment Strategy: Maintain clear promotion paths: local → sandbox → staging → prod, with one-click rollback. Own schema-migration playbooks; zero-downtime releases are the default, not the exception.
Technical Mentorship: Run fortnightly architecture clinics; level up engineers on domain-driven design and performance profiling. Act as tie-breaker on competing technical proposals, keeping debates respectful and evidence-based.

Qualifications
5+ years of engineering experience, with 2+ years in a dedicated architecture or staff-level role on a high-traffic SaaS product.
Proven track record designing multi-tenant systems that scaled beyond 50k users or 1k RPM.
Deep knowledge of Node.js/TypeScript (our core stack), MongoDB or similar NoSQL, plus comfort with event brokers (Kafka, NATS, or RabbitMQ).
Fluency in AWS (preferred) or GCP primitives—EKS, Lambda, RDS, CloudFront, IAM.
Hands-on with observability stacks (Datadog, New Relic, Sentry, or OpenTelemetry).
Excellent written communication; you can distill technical trade-offs into one page for execs and one diagram for engineers.
Posted 3 weeks ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
iMerit is a leading AI data solutions company specializing in transforming unstructured data into structured intelligence for advanced machine learning and analytics applications. Our clients span autonomous mobility, medical AI, agriculture, and more—powering next-generation AI systems with high-quality data services.

About the Role
We are seeking a skilled Data Engineer to help scale and enhance our internal data observability and analytics platform. This platform integrates with data annotation tools and ML pipelines to provide visibility, insights, and automation across large-scale data operations. You will design and optimize robust data pipelines, build integrations with internal platforms (e.g., AngoHub, 3DPCT) and customer platforms, and support real-time metrics, dashboards, and workflows critical to customer delivery and operational excellence.

Key Responsibilities
● Design and build scalable batch and real-time data pipelines across structured and unstructured sources.
● Integrate analytics and observability services with upstream annotation tools and downstream ML validation systems to enable full-cycle traceability.
● Collaborate with product, platform, and analytics teams to define event models, metrics, and data contracts.
● Develop ETL/ELT workflows using tools like AWS Glue, PySpark, or Airflow; ensure data quality, lineage, and reconciliation.
● Implement observability pipelines and alerts for mission-critical metrics (e.g., annotation throughput, quality KPIs, latency).
● Build data models and queries to power dashboards and insights via tools like Athena, QuickSight, or Redash.
● Contribute to infrastructure-as-code and CI/CD practices for deployment across cloud environments (preferably AWS).
● Document architecture, data flow, and support runbooks; continuously improve platform performance and resilience.
● Integrate with customer data platforms and pipelines, including bespoke data frameworks.

Minimum Qualifications
● 4–8 years of experience in data engineering or backend development in data-intensive environments.
● Proficient in Python and SQL; familiarity with PySpark or other distributed processing frameworks.
● Strong experience with cloud-native data tools and services (S3, Lambda, Glue, Kinesis, Firehose, RDS).
● Familiarity with frameworks like Apache Hadoop, Apache Spark, and related tools for handling large datasets.
● Experience with data lake and warehouse patterns (e.g., Delta Lake, Redshift, Snowflake).
● Solid understanding of data modeling, schema design, and versioned datasets.
● Data Governance and Security: Understanding and implementing data governance policies and security measures.
● Proven experience in building resilient, production-grade pipelines and troubleshooting live systems.
● Working knowledge of messaging frameworks like Kafka, Firehose, etc.
● Working knowledge of API frameworks, and robust, performant API design.
● Good working knowledge of database fundamentals, relational databases, and SQL.

Preferred Qualifications
● Experience with observability/monitoring systems (e.g., Prometheus, Grafana, OpenTelemetry) is a plus.
● Familiarity with data governance, RBAC, PII redaction, or compliance in analytics platforms.
● Exposure to annotation/ML workflow tools or ML model validation platforms.
● Comfort working in Agile, distributed teams using tools like Git, JIRA, and Slack.

Why Join Us?
You’ll work at the intersection of AI, data infrastructure, and impact—contributing to platforms that ensure AI is explainable, auditable, and ethical at scale. Join a team building the next generation of intelligent data operations.
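As an illustration of the ETL/ELT and data-quality responsibilities listed in this posting, here is a minimal Airflow DAG sketch (Airflow 2.4+ is assumed for the `schedule` argument). The DAG id, task logic, and row-count check are hypothetical placeholders, not details from the listing.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull raw annotation events from an upstream store (S3, API, ...).
    return [{"task_id": i, "labels": i % 5} for i in range(100)]

def validate(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    if not rows:  # simple data-quality gate; real checks would cover schema and lineage too
        raise ValueError("No rows extracted; failing the run for reconciliation.")
    return len(rows)

def load(ti, **context):
    row_count = ti.xcom_pull(task_ids="validate")
    print(f"Would load {row_count} rows into the warehouse here.")

with DAG(
    dag_id="annotation_metrics_daily",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_validate >> t_load
```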
Posted 3 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Seeking a highly skilled and experienced Senior .NET Developer to join our team, working closely with Red Hat and customer teams. This role is pivotal in designing, developing, and, crucially, mentoring others in the adoption of modern Cloud Native Development practices. If you're passionate about pairing, fostering technical growth, and building robust microservices-based solutions with .NET and Podman, we want to hear from you. Key Responsibilities • Lead the design, development, and implementation of high-quality, scalable, and secure microservices using C# and the .NET (Core) ecosystem. • Drive the adoption and implementation of Continuous Delivery (CD) pipelines, ensuring efficient and reliable software releases for microservices. • Highly skilled in Test-Driven Development (TDD) practices, writing comprehensive unit, integration, and end-to-end tests to ensure code quality and maintainability within a microservices architecture. • Design, develop, and deploy .NET microservices within containers, leveraging inner loop practices • Utilize Podman/Docker Compose (or similar multi-container tooling compatible with Podman) for local development environments and multi-service microservices application setups. • Implement robust API Testing strategies, including automated tests for RESTful APIs across microservices. • Integrate and utilize Observability tools and practices (e.g., logging, metrics, tracing) to monitor application health, performance, and troubleshoot issues effectively in a containerized microservices environment. • Collaborate closely with product owners, architects, and other developers to translate business requirements into technical solutions, specifically focusing on microservices design. • Play a key mentoring role, actively participating in pairing sessions, providing technical guidance, and fostering the development of junior and mid-level engineers in microservices development. • Contribute to code reviews with an eye for quality, maintainability, and knowledge transfer within a microservices context. • Actively participate in architectural discussions and contribute to technical decision-making, particularly concerning microservices design patterns, containerization strategies with Podman, and overall system architecture. • Stay up-to-date with emerging technologies and industry best practices in .NET, microservices, and containerization, advocating for their adoption where appropriate. • Troubleshoot and debug complex issues across various environments, including Podman containers and distributed microservices. Required Skills and Experience • 7+ years of professional experience in software development with a strong focus on the Microsoft .NET (Core) ecosystem (ideally .NET 6+ or .NET 8+). • Expertise in C# and building modern applications with .NET Core. • Demonstrable experience designing, developing, and deploying Microservices Architecture. • Demonstrable experience with Continuous Delivery (CD) principles and tools (e.g., Azure DevOps, GitLab CI/CD, Jenkins). • Proven track record of applying Test-Driven Development (TDD) methodologies. • Strong practical experience with Podman, including building and running .NET applications in Podman containers, and an understanding of its daemonless/rootless architecture benefits. • Proficiency in using Podman Compose (or similar approaches) for managing multi-container .NET applications locally. 
• Extensive experience with API Testing frameworks and strategies (e.g., Postman, Newman, SpecFlow, Playwright, XUnit/NUnit for integration tests). • Deep understanding and practical experience with Observability principles and tools (e.g., Application Insights, Prometheus, Grafana, OpenTelemetry, ELK Stack, Splunk). • Solid understanding of RESTful API design and development. • Experience with relational databases (e.g., SQL Server, PostgreSQL) and ORMs (e.g., Entity Framework Core). • Excellent mentorship and communication skills, with a passion for knowledge sharing and team development. • Excellent problem-solving, analytical, and communication skills. • Ability to work independently and as part of a collaborative team.
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career as a DevOps Developer will provide you with the opportunity to help our clients leverage technology to enhance their customer experience.

Responsibilities:
1. CI/CD (Jenkins/Azure DevOps/goCD/ArgoCD)
2. Containerization (Docker, Kubernetes)
3. Cloud & observability (AWS, Terraform, AWS CDK, Elastic Stack, Istio, Linkerd, OpenTelemetry)

Mandatory skill sets:
1. CI/CD (Jenkins/Azure DevOps/goCD/ArgoCD)
2. Containerization (Docker, Kubernetes)
3. Cloud & observability (AWS, Terraform, AWS CDK, Elastic Stack, Istio, Linkerd, OpenTelemetry)

Preferred skill sets:
1. CI/CD (Jenkins/Azure DevOps/goCD/ArgoCD)
2. Containerization (Docker, Kubernetes)
3. Cloud & observability (AWS, Terraform, AWS CDK, Elastic Stack, Istio, Linkerd, OpenTelemetry)

Years of experience required: 4+ years
Education qualification: BE/B.Tech/MBA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: CI/CD
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role:
As our Agentic System Architect, you will define and own the end-to-end architecture of our Python-based autonomous agent platform. Leveraging cutting-edge frameworks—LangChain, LangGraph, RAG pipelines, and more—you’ll ensure our multi-agent workflows are resilient, scalable, and aligned with business objectives.

Key Responsibilities
1. Architectural Strategy & Standards: Define system topology: microservices, agent clusters, RAG retrieval layers, and knowledge-graph integrations. Establish architectural patterns for chain-based vs. graph-based vs. retrieval-augmented workflows.
2. Component & Interface Design: Specify Python modules for LLM integration, RAG connectors (Haystack, LlamaIndex), vector store adapters, and policy engines. Design REST/gRPC and message-queue interfaces compatible with Kafka/RabbitMQ, Semantic Kernel, and external APIs.
3. Scalability & Reliability: Architect auto-scaling of Python agents on Kubernetes/EKS (including GPU-enabled inference pods). Define fault-tolerance patterns (circuit breakers, retries, bulkheads) and lead chaos-testing of agent clusters.
4. Security & Governance: Embed authentication/authorization in agent flows (OIDC, OAuth2) and secure data retrieval (encrypted vector stores). Implement governance: prompt auditing, model-version control, drift detection, and usage quotas.
5. Performance & Cost Optimization: Specify profiling/tracing requirements (OpenTelemetry in Python) across chain, graph, and RAG pipelines. Architect caching layers and GPU/CPU resource policies to minimize inference latency and cost.
6. Cross-Functional Leadership: Collaborate with AI research, DevOps, and product teams to align roadmaps with strategic goals. Review and enforce best practices in Python code, CI/CD (GitHub Actions), and IaC (Terraform).
7. Documentation & Evangelism: Produce architecture diagrams, decision records, and runbooks illustrating agentic designs (ReAct, CoT, RAG). Mentor engineers on agentic patterns—chain-of-thought, graph traversals, retrieval loops—and Python best practices.

Preferred Qualifications
Bachelor’s Degree in Computer Science, Information Technology, or related fields (e.g., B.Tech, B.E., B.Sc. in Computer Science)
Preferred/Ideal Educational Qualification: Master’s Degree (optional but highly valued) in one of the following: M.Tech or M.E. in Computer Science / AI / Data Science; M.Sc. in Artificial Intelligence or Machine Learning; Integrated M.Tech programs in AI/ML from top-tier institutions like IITs, IIIT-H, IISc
Bonus or Value-Add Qualifications: Ph.D. or research experience in NLP, Information Retrieval, or Agentic AI (especially relevant if applying to R&D-heavy teams like Microsoft Research, TCS Research, or AI startups)
Certifications or online credentials in: LangChain, RAG architectures (DeepLearning.AI, Cohere, etc.); Advanced Python (Coursera/edX/Springboard/NPTEL); Cloud-based ML operations (AWS/Azure/GCP)

Additional Skill Set:
Hands-on with agentic frameworks: LangChain, LangGraph, Microsoft AutoGen
Experience building RAG pipelines with Haystack, LlamaIndex, or custom retrieval modules
Familiarity with vector databases (FAISS, Pinecone, Chroma) and knowledge-graph stores (Neo4j)
Expertise in observability stacks (Prometheus, Grafana, OpenTelemetry)
Background in LLM SDKs (OpenAI, Anthropic) and function-calling paradigms

Core Skills & Competencies
System Thinking: Decompose complex business goals into modular, maintainable components
Python Mastery: Idiomatic Python, async/await, package management (Poetry/venv)
Distributed Design: Microservices, agent clusters, RAG retrieval loops, event streams
Security-First: Embed authentication, authorization, and auditability
Leadership: Communicate complex system designs clearly to both technical and non-technical stakeholders

We are looking for someone with a proven track record in leveraging cutting-edge agentic frameworks and protocols. This includes hands-on experience with technologies such as Agent-to-Agent (A2A) communication protocols, LangGraph, LangChain, CrewAI, and other similar multi-agent orchestration tools. Your expertise will be crucial in transforming traditional, reactive AI applications into proactive, goal-driven intelligent agents that can significantly enhance operational efficiency, decision-making, and customer engagement in high-stakes domains. We envision this role as instrumental in driving innovation, translating cutting-edge academic research into deployable solutions, and contributing to the development of robust, scalable, and ethical AI agentic systems.
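The RAG work described in this listing reduces to a retrieve-then-prompt loop. The sketch below shows that loop framework-free (no LangChain or LlamaIndex calls, to avoid tying it to any one API); the hash-based `embed` function is a deliberate stand-in for a real embedding model, and all names are hypothetical.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: hashed character trigrams. A real system would call an embedding model."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding and keep the top_k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: float(np.dot(q, embed(d))), reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    corpus = [
        "Circuit breakers stop cascading failures between agents.",
        "Vector stores hold document embeddings for retrieval.",
        "OIDC and OAuth2 handle authentication in agent flows.",
    ]
    context = retrieve("How do agents fetch relevant documents?", corpus)
    prompt = "Answer using only this context:\n" + "\n".join(context)
    print(prompt)  # a real pipeline would send this prompt to an LLM and audit the exchange
```

A production version would swap the stand-in embedding for a model call, store vectors in FAISS/Pinecone/Chroma, and wrap the loop in whichever orchestration framework the team standardizes on.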
Posted 3 weeks ago
0 years
6 - 8 Lacs
Hyderābād
On-site
Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.

In this role, you will:
Design & Develop Observability Solutions: Build and enhance telemetry pipelines for logs, metrics, and traces using industry-standard tools (Kafka, OpenTelemetry, Splunk)
Instrument Applications: Implement observability best practices in infrastructure, applications and platforms. Design and implement machine learning models to analyze logs, metrics and traces for anomaly detection, predictive failure analysis and root cause analysis.
Monitor & Analyze System Performance: Build and develop real-time data visualization dashboards and alerts to track system health, detect anomalies, and support real-time troubleshooting.
Work with Event-Driven Architectures: Integrate observability with messaging systems like Kafka, RabbitMQ, or Pulsar for real-time monitoring.
Collaborate Across Teams: Work closely with SREs, DevOps, and development teams to improve system reliability and incident response.
Security & Compliance: Ensure observability data is securely stored and compliant with relevant regulations (GDPR, HIPAA, etc.).
Optimize Performance: Conduct root cause analysis and improve system observability to reduce downtime and improve response times.

Requirements
To be successful in this role, you should meet the following requirements:
Data Science & Machine Learning experience: Hands-on proficiency in Python, TensorFlow, PyTorch, Scikit-learn, Pandas, NumPy.
Extensive knowledge of ETL techniques: Data extraction, transformation, and loading using Apache Airflow, Apache NiFi, Spark or similar tools.
Observability Stack: Hands-on experience with Prometheus, Grafana, ELK Stack, Loki, OpenTelemetry, Jaeger, or Zipkin.
Experience with Time-Series Analysis, Predictive Analytics and AI-driven Observability.
Cloud & Infrastructure: Experience with AWS, Azure, or GCP observability services (e.g., CloudWatch, Azure Monitor).
Distributed Systems & Microservices: Understanding of Kubernetes, Docker, and Service Mesh technologies (Istio, Linkerd).
Event-Driven Architectures: Experience with Kafka, RabbitMQ, or other message brokers.
Database & Storage: Familiarity with time-series databases (InfluxDB, VictoriaMetrics) and NoSQL/SQL databases.

You’ll achieve more when you join HSBC.
www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by – HSBC Software Development India
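To make the instrumentation responsibilities above concrete, here is a minimal OpenTelemetry tracing sketch in Python using the official SDK (`opentelemetry-api`/`opentelemetry-sdk`). The service name and span attributes are illustrative assumptions; a production setup would replace the console exporter with an OTLP exporter feeding a collector (and onward to Kafka or Splunk).

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer provider; ConsoleSpanExporter keeps the example self-contained.
resource = Resource.create({"service.name": "payments-api"})  # hypothetical service name
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def process_payment(order_id: str) -> None:
    # Each unit of work becomes a span carrying searchable attributes.
    with tracer.start_as_current_span("process_payment") as span:
        span.set_attribute("order.id", order_id)
        # business logic would run here

if __name__ == "__main__":
    process_payment("ORD-1001")
```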
Posted 4 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description Seeking a highly skilled and experienced Senior .NET Developer to join our team, working closely with Red Hat and customer teams. This role is pivotal in designing, developing, and, crucially, mentoring others in the adoption of modern Cloud Native Development practices. If you're passionate about pairing, fostering technical growth, and building robust microservices-based solutions with .NET and Podman, we want to hear from you. Key Responsibilities • Lead the design, development, and implementation of high-quality, scalable, and secure microservices using C# and the .NET (Core) ecosystem. • Drive the adoption and implementation of Continuous Delivery (CD) pipelines, ensuring efficient and reliable software releases for microservices. • Highly skilled in Test-Driven Development (TDD) practices, writing comprehensive unit, integration, and end-to-end tests to ensure code quality and maintainability within a microservices architecture. • Design, develop, and deploy .NET microservices within containers, leveraging inner loop practices • Utilize Podman/Docker Compose (or similar multi-container tooling compatible with Podman) for local development environments and multi-service microservices application setups. • Implement robust API Testing strategies, including automated tests for RESTful APIs across microservices. • Integrate and utilize Observability tools and practices (e.g., logging, metrics, tracing) to monitor application health, performance, and troubleshoot issues effectively in a containerized microservices environment. • Collaborate closely with product owners, architects, and other developers to translate business requirements into technical solutions, specifically focusing on microservices design. • Play a key mentoring role, actively participating in pairing sessions, providing technical guidance, and fostering the development of junior and mid-level engineers in microservices development. • Contribute to code reviews with an eye for quality, maintainability, and knowledge transfer within a microservices context. • Actively participate in architectural discussions and contribute to technical decision-making, particularly concerning microservices design patterns, containerization strategies with Podman, and overall system architecture. • Stay up-to-date with emerging technologies and industry best practices in .NET, microservices, and containerization, advocating for their adoption where appropriate. • Troubleshoot and debug complex issues across various environments, including Podman containers and distributed microservices. Required Skills and Experience • 7+ years of professional experience in software development with a strong focus on the Microsoft .NET (Core) ecosystem (ideally .NET 6+ or .NET 8+). • Expertise in C# and building modern applications with .NET Core. • Demonstrable experience designing, developing, and deploying Microservices Architecture. • Demonstrable experience with Continuous Delivery (CD) principles and tools (e.g., Azure DevOps, GitLab CI/CD, Jenkins). • Proven track record of applying Test-Driven Development (TDD) methodologies. • Strong practical experience with Podman, including building and running .NET applications in Podman containers, and an understanding of its daemonless/rootless architecture benefits. • Proficiency in using Podman Compose (or similar approaches) for managing multi-container .NET applications locally. 
• Extensive experience with API Testing frameworks and strategies (e.g., Postman, Newman, SpecFlow, Playwright, XUnit/NUnit for integration tests). • Deep understanding and practical experience with Observability principles and tools (e.g., Application Insights, Prometheus, Grafana, OpenTelemetry, ELK Stack, Splunk). • Solid understanding of RESTful API design and development. • Experience with relational databases (e.g., SQL Server, PostgreSQL) and ORMs (e.g., Entity Framework Core). • Excellent mentorship and communication skills, with a passion for knowledge sharing and team development. • Excellent problem-solving, analytical, and communication skills. • Ability to work independently and as part of a collaborative team
Posted 4 weeks ago
6.0 years
10 - 16 Lacs
Chennai, Tamil Nadu, India
On-site
About The Company (Industry & Sector)
A rapidly-scaling SaaS provider for warehouse automation, inventory planning and last-mile logistics, delivering cloud-native platforms that orchestrate replenishment, picking workflows and real-time visibility for global retailers and 3PLs. Leveraging a microservices stack built on ASP.NET Core, SignalR and Blazor, the engineering culture prizes clean architecture, high availability and developer autonomy—empowering teams to ship mission-critical APIs that move millions of units daily.

Role & Responsibilities
Design, develop and own RESTful APIs with ASP.NET Core 6/7, powering modules such as the Replenishment Engine, Pick/Bulk workflow and CSV-driven business rules.
Implement real-time messaging & notifications via SignalR and optimize for sub-second updates across web, mobile and IoT clients.
Enforce enterprise-grade security—ADFS/SAML SSO, RBAC and token lifecycles—while keeping services stateless and auditable.
Drive performance, scalability and fault-tolerance, using Redis caching, async patterns and rigorous load testing.
Integrate code with GitLab/Jenkins CI/CD, writing thorough unit/integration tests and automated deployment pipelines.
Partner with Blazor, mobile and data teams in agile rituals (sprint planning, code reviews, pair programming) to ship value every iteration.

Skills & Qualifications

Must-Have
4–6 years’ backend development, including 2+ years building ASP.NET Core Web APIs (v6/7).
Expert C#, Entity Framework Core/LINQ/SQL Server, and proven skill designing stateless, versioned microservices.
Hands-on SignalR for real-time comms plus experience parsing/validating structured files (CSV).
Solid grasp of API security, authentication & authorization (ADFS, SAML, JWT, RBAC).
Proficiency with Git, CI/CD (GitLab or Jenkins), API versioning and automated testing.
Performance-tuning mindset—profiling, caching (Redis) and telemetry with Serilog or similar.

Preferred
Exposure to Blazor or other SPA frameworks, mobile-backend integration and SSRS reporting.
Familiarity with containerisation/Kubernetes, message queues (RabbitMQ, Azure Service Bus) and observability (OpenTelemetry).
Experience implementing CQRS/event-sourcing patterns or distributed transaction strategies.
Certification in Microsoft Azure or .NET, or contributions to OSS libraries, tooling or tech blogs.
Background optimising large SQL workloads and designing highly concurrent systems.
Passion for mentoring peers and championing clean code, DDD and SOLID principles.

Skills: ci/cd, integration testing, serilog, authentication, ssrs, signalr, saml, sql server, rbac, blazor, .net, linq, entity framework core, authorization, jwt, git, adfs, .net core, opentelemetry, asp.net, api security, unit testing, performance tuning, kubernetes, c#, redis, jenkins, azure service bus, sql, asp.net core, asp.net core 6/7, rabbitmq, gitlab
Posted 4 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description Seeking a highly skilled and experienced Senior .NET Developer to join our team, working closely with Red Hat and customer teams. This role is pivotal in designing, developing, and, crucially, mentoring others in the adoption of modern Cloud Native Development practices. If you're passionate about pairing, fostering technical growth, and building robust microservices-based solutions with .NET and Podman, we want to hear from you. Key Responsibilities • Lead the design, development, and implementation of high-quality, scalable, and secure microservices using C# and the .NET (Core) ecosystem. • Drive the adoption and implementation of Continuous Delivery (CD) pipelines, ensuring efficient and reliable software releases for microservices. • Highly skilled in Test-Driven Development (TDD) practices, writing comprehensive unit, integration, and end-to-end tests to ensure code quality and maintainability within a microservices architecture. • Design, develop, and deploy .NET microservices within containers, leveraging inner loop practices • Utilize Podman/Docker Compose (or similar multi-container tooling compatible with Podman) for local development environments and multi-service microservices application setups. • Implement robust API Testing strategies, including automated tests for RESTful APIs across microservices. • Integrate and utilize Observability tools and practices (e.g., logging, metrics, tracing) to monitor application health, performance, and troubleshoot issues effectively in a containerized microservices environment. • Collaborate closely with product owners, architects, and other developers to translate business requirements into technical solutions, specifically focusing on microservices design. • Play a key mentoring role, actively participating in pairing sessions, providing technical guidance, and fostering the development of junior and mid-level engineers in microservices development. • Contribute to code reviews with an eye for quality, maintainability, and knowledge transfer within a microservices context. • Actively participate in architectural discussions and contribute to technical decision-making, particularly concerning microservices design patterns, containerization strategies with Podman, and overall system architecture. • Stay up-to-date with emerging technologies and industry best practices in .NET, microservices, and containerization, advocating for their adoption where appropriate. • Troubleshoot and debug complex issues across various environments, including Podman containers and distributed microservices. Required Skills and Experience • 7+ years of professional experience in software development with a strong focus on the Microsoft .NET (Core) ecosystem (ideally .NET 6+ or .NET 8+). • Expertise in C# and building modern applications with .NET Core. • Demonstrable experience designing, developing, and deploying Microservices Architecture. • Demonstrable experience with Continuous Delivery (CD) principles and tools (e.g., Azure DevOps, GitLab CI/CD, Jenkins). • Proven track record of applying Test-Driven Development (TDD) methodologies. • Strong practical experience with Podman, including building and running .NET applications in Podman containers, and an understanding of its daemonless/rootless architecture benefits. • Proficiency in using Podman Compose (or similar approaches) for managing multi-container .NET applications locally. 
• Extensive experience with API Testing frameworks and strategies (e.g., Postman, Newman, SpecFlow, Playwright, XUnit/NUnit for integration tests). • Deep understanding and practical experience with Observability principles and tools (e.g., Application Insights, Prometheus, Grafana, OpenTelemetry, ELK Stack, Splunk). • Solid understanding of RESTful API design and development. • Experience with relational databases (e.g., SQL Server, PostgreSQL) and ORMs (e.g., Entity Framework Core). • Excellent mentorship and communication skills, with a passion for knowledge sharing and team development. • Excellent problem-solving, analytical, and communication skills. • Ability to work independently and as part of a collaborative team.
Posted 4 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Title: Senior DevOps Engineer Location: India (Remote) About Us: Platform9: A Better Way to Go Cloud Native Platform9 is a leader in simplifying enterprise private clouds. Our flagship product, Private Cloud Director, turns existing infrastructure into a full-featured private cloud. Enterprise IT teams can manage VMs and containers with familiar GUI tools and automated APIs in a private, secure environment. Enterprises are selecting Platform9's Private Cloud Director to migrate away from legacy virtualization platforms because it meets all of the following enterprise requirements: - Familiar VM management experience - Critical enterprise virtualization features: HA, DRR, networking, scale, reliability - Compatibility with all existing hardware environments, including 3rd-party storage - Automated migration tooling that lowers cost barrier by 10x Platform9 was founded by a team of VMware cloud pioneers and has over 30,000 nodes in production at some of the world’s largest enterprises, including Cloudera, EBSCO, Juniper Networks, and Rackspace. Platform9 is an inclusive, globally distributed company backed by prominent investors, committed to driving private cloud innovation and efficiency About the Role We are seeking a highly motivated and experienced Senior DevOps Engineer to join our growing team. In this role, you will be responsible for the design, implementation, and maintenance of our cloud infrastructure, ensuring high availability, scalability, and security. You will be working closely with our engineering team to automate deployments, manage infrastructure as code, and troubleshoot production issues. This is a unique opportunity to work on cutting-edge technologies and contribute to the success of a rapidly growing company. We offer a fast-paced and collaborative work environment where you will have the opportunity to learn and grow your skills. Responsibilities * Design, implement, and maintain our cloud infrastructure on AWS, including Kubernetes clusters, OpenStack environments, and supporting services. * Automate infrastructure provisioning, configuration management, and application deployments using tools like Terraform. * Implement and manage monitoring and logging solutions using Prometheus, Grafana, and other relevant tools. * Develop and maintain internal tooling and scripts to improve operational efficiency. * Troubleshoot and resolve production issues related to infrastructure, applications, and performance. * Collaborate with engineering teams to implement and maintain CI/CD pipelines. * Participate in on-call rotation to ensure 24/7 availability of critical services. * Stay up-to-date on the latest technologies and trends in cloud computing and DevOps. Qualifications * 5+ years of experience in a DevOps or SRE role, with a strong understanding of cloud infrastructure and operations. * Extensive experience with Kubernetes, including cluster administration, deployment strategies, and troubleshooting. * Experience with OpenStack is highly desirable, but not required. * Proficiency in infrastructure-as-code tools like Terraform or Ansible. * Strong scripting skills in Python or similar languages. * Strong programming skills in Golang or similar languages. * Strong configuration management skills with Salt, Chef or similar languages. * Experience with Observability tools like Prometheus, Cortex, Grafana, and Loki. * Experience with CI/CD tools and best practices. * Experience with administrating and debugging on Linux-based operating systems. 
* Excellent problem-solving and troubleshooting skills. * Strong communication and collaboration skills. * Strong incident management experience. Bonus Points * Experience with EKS (Elastic Kubernetes Service). * Experience with Cluster API, Cluster API Provider for AWS * Experience with managing on-premise infrastructure. * Familiarity with OpenTelemetry and AI-powered observability tools. * Experience working in a fast-paced startup environment.
Posted 4 weeks ago
6.0 years
0 Lacs
India
On-site
What We’re Looking For 6+ years of backend/data engineering experience; 2+ years leading teams. Proven success delivering large-scale data migrations or complex ETL projects in production SaaS environments. Deep expertise with Python (Pandas, SQLAlchemy/Django ORM), PostgreSQL , and AWS (S3, RDS, ECS or EKS). Comfort with message queues/workers (Celery, Redis, or similar) and distributed task orchestration. Strong SQL chops—able to profile schemas, diagnose locks, and optimize batch loads. Bonus Points Experience in migrating healthcare EMRs or other regulated PHI data; familiarity with HL7, FHIR, or HIPAA. Hands-on with data-quality platforms (Great Expectations), observability stacks (Datadog, OpenTelemetry), or AI/LLM-driven QA. Familiar with Django, Pydantic, or FastAPI in a monorepo setting. Startup DNA—you iterate rapidly, wear many hats, and enjoy greenfield architecture. About the Role As Senior Migrations Engineering Lead you will own the end-to-end data-migration function—from scoping and tooling through execution and QA—and grow a small but elite team into a disciplined practice. You’ll design repeatable frameworks that ingest messy exports from legacy systems, transform them into our domain model, and land them with zero downtime and forensic-level accuracy. Your work is mission-critical: every smooth cut-over accelerates revenue recognition and strengthens customer trust. What You'll Do Lead & Mentor – Manage 3-6 engineers and data analysts; hire, coach, and set a high technical bar. Architect Tooling – Design scalable ETL pipelines with automated mapping, validation, and rollback. Own the Roadmap – Prioritize migration projects in partnership with Ops, Sales, and Product; drive clear timelines and SLAs. Guarantee Data Integrity – Implement rigorous QA (automated tests, manual checks, reconciliation queries, AI-assisted audits) meeting ≥ 99.9% accuracy targets. Standardize Processes – Create playbooks, templates, and documentation so migrations feel “push-button” to new hires and customers alike. Reduce Risk – Champion observability, HIPAA compliance, PHI encryption, and least-privilege access across tooling and S3 data lakes. Collaborate Cross-Functionally – Align with Customer Success on requirements scoping, Finance on revenue cut-over, and Core Engineering on schema evolution. Continuously Improve – Track metrics (cycle time, defect rate, hours per migration) and drive iterative optimizations. Excellent project-management and stakeholder-communication skills; you keep Sales and Execs un-surprised. Passion for building tooling , automation , and documentation that make future migrations faster and safer. Why Join Us? High Impact – Your work directly accelerates revenue and customer delight. Leadership Opportunity – Build the migrations org from the ground up. Cutting-Edge Stack – Modern Python, Postgres, AI-powered tooling, and cloud-native infrastructure. Mission-Driven – Help private practices deliver better hearing healthcare at scale.
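A reconciliation check like the one implied by the ≥ 99.9% accuracy target could start as small as the pandas sketch below. The key column, the `amount` field, and the toy frames are hypothetical; a real migration would compare many fields and read from the legacy export and the Postgres target rather than in-memory samples.

```python
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame, key: str) -> dict:
    """Compare a legacy export against the migrated table and report discrepancies."""
    merged = source.merge(target, on=key, how="outer", suffixes=("_src", "_dst"), indicator=True)
    missing_in_target = merged[merged["_merge"] == "left_only"]
    unexpected_in_target = merged[merged["_merge"] == "right_only"]
    matched = merged[merged["_merge"] == "both"]
    mismatched = matched[matched["amount_src"].ne(matched["amount_dst"])]  # one illustrative field
    accuracy = 1 - (len(missing_in_target) + len(mismatched)) / max(len(source), 1)
    return {
        "missing_in_target": len(missing_in_target),
        "unexpected_in_target": len(unexpected_in_target),
        "field_mismatches": len(mismatched),
        "accuracy": accuracy,
    }

if __name__ == "__main__":
    src = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
    dst = pd.DataFrame({"id": [1, 2, 4], "amount": [10.0, 25.0, 40.0]})
    report = reconcile(src, dst, key="id")
    print(report)
    if report["accuracy"] < 0.999:
        print("Below the 99.9% accuracy target: block the cut-over and investigate.")
```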
Posted 4 weeks ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Profile: Site Reliability Engineer (SRE)
Experience: 3+ Years
Location: Bangalore, Chennai, Gurgaon, Pune
Shift: 12:00 AM to 9:00 PM

Skills & Requirements
Dynatrace expertise (On-prem & SaaS)
Python coding (NOT just scripting)
SLI/SLO/SLA implementation
OpenTelemetry & instrumentation
AWS Services (CloudWatch, X-Ray, Lambda)
OpenShift ROSA experience
Grafana dashboard creation
Observability platform knowledge

Key Responsibilities
Design, implement and maintain Dynatrace monitoring solutions across on-premise and SaaS environments
Develop robust Python applications and automation tools for infrastructure management
Define, implement and continuously monitor SLI/SLO/SLA metrics for critical business services
Configure OpenTelemetry instrumentation for distributed tracing and metrics collection
Monitor, troubleshoot and optimize AWS services including CloudWatch, X-Ray, and Lambda functions
Manage OpenShift ROSA clusters and containerized applications on AWS
Create comprehensive Grafana dashboards for real-time monitoring and alerting
Ensure system reliability, performance optimization and capacity planning
Participate in on-call rotations for production incident response and resolution
Conduct root cause analysis and implement preventive measures
Collaborate with development teams to improve application observability
Maintain documentation for monitoring procedures and incident response playbooks
(ref:hirist.tech)
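The SLI/SLO/SLA work in this listing usually reduces to tracking an error budget and its burn rate. Below is a minimal Python sketch, assuming simple success/failure counts over a one-hour window; the counts and the 14.4x fast-burn threshold are illustrative (the threshold follows commonly cited multi-window alerting guidance, not anything specific to this role).

```python
from dataclasses import dataclass

@dataclass
class SloWindow:
    total_requests: int
    failed_requests: int
    slo_target: float    # e.g. 0.999 means "99.9% of requests succeed"
    window_hours: float  # burn-rate alert thresholds depend on the window length

    @property
    def sli(self) -> float:
        return 1.0 - (self.failed_requests / self.total_requests)

    @property
    def burn_rate(self) -> float:
        allowed_error_rate = 1.0 - self.slo_target
        actual_error_rate = self.failed_requests / self.total_requests
        return actual_error_rate / allowed_error_rate if allowed_error_rate else float("inf")

if __name__ == "__main__":
    window = SloWindow(total_requests=120_000, failed_requests=300, slo_target=0.999, window_hours=1)
    print(f"SLI: {window.sli:.4%}")
    print(f"Burn rate: {window.burn_rate:.1f}x the sustainable rate")
    if window.burn_rate > 14.4:  # illustrative fast-burn threshold for a 1-hour window
        print("Page: fast error-budget burn detected")
```

In a real deployment the request counts would come from Dynatrace, CloudWatch, or Prometheus queries, and the burn-rate check would live in an alerting rule rather than a script.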
Posted 4 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Employment Type: Full-Time Location: Ahmedabad, On-site Experience Required: 5+ Years About Techiebutler Techiebutler partners with startup founders and CTOs to deliver high-quality products quickly. We’re a focused team dedicated to execution, innovation, and solving real-world challenges with minimal bureaucracy. Role Overview We’re seeking a Senior Golang Backend Engineer to lead the design and development of scalable, high-performance backend systems. You’ll play a pivotal role in shaping our solutions, tech stack, driving technical excellence, and mentoring the team to deliver robust solutions. Key Responsibilities Design and develop scalable, high-performance backend services using Go Optimize systems for reliability, efficiency, and maintainability Establish technical standards for development, testing Mentor team members and conduct code reviews to enhance code quality Monitor and troubleshoot systems using tools like DataDog, Prometheus Collaborate with cross-functional teams on API design, integration, and architecture. What We’re Looking For Experience: 5+ years in backend development, with 3+ years in Go Cloud & Serverless: Proficient in AWS (Lambda, DynamoDB, SQS) Containerization: Hands-on experience with Docker and Kubernetes Microservices: Expertise in designing and maintaining microservices and distributed systems Concurrency: Strong understanding of concurrent programming and performance optimization Domain-Driven Design: Practical experience applying DDD principles Testing: Proficient in automated testing, TDD, and BDD CI/CD & DevOps: Familiarity with GitLab CI, GitHub Actions, or Jenkins Observability: Experience with ELK Stack, OpenTelemetry, or similar tools Collaboration: Excellent communication and teamwork skills in Agile/Scrum environments. Why Join Us? Work with cutting-edge technologies to shape our platform’s future Thrive in a collaborative, inclusive environment that values innovation Competitive salary and career growth opportunities Contribute to impactful projects in a fast-paced tech company. Apply Now If you’re passionate about building scalable systems and solving complex challenges, join our high-performing team! Apply today to be part of Techiebutler’s journey
Posted 4 weeks ago
0.0 - 3.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Key Responsibilities:
Deliver engaging and interactive training sessions (24 hours total) based on structured modules.
Teach integration of monitoring, logging, and observability tools with machine learning.
Guide learners in real-time anomaly detection, incident management, root cause analysis, and predictive scaling.
Support learners in deploying tools like Prometheus, Grafana, OpenTelemetry, Neo4j, Falco, and KEDA.
Conduct hands-on labs using LangChain, Ollama, Prophet, and other AI/ML frameworks.
Help participants set up smart workflows for alert classification and routing using open-source stacks.
Prepare learners to handle security, threat detection, and runtime anomaly classification using LLMs.
Provide post-training support and mentorship when necessary.

Observability & Monitoring: Prometheus, Grafana, OpenTelemetry, ELK Stack, FluentBit
AI/ML: Python, scikit-learn, Prophet, LangChain, Ollama (LLMs)
Security Tools: Falco, KubeArmor, Sysdig Secure
Dev Tools: Docker, VSCode, Jupyter Notebooks
LLMs & Automation: LangChain, Neo4j, GPT-based explanation tools, Slack Webhooks.
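For the predictive-scaling and anomaly-detection labs mentioned above, a Prophet-based forecast is a typical starting exercise. The sketch below trains on synthetic hourly load data (assuming the `prophet` package is installed); the seasonality, horizon, and interval width are illustrative choices, not part of the curriculum described in the listing.

```python
import numpy as np
import pandas as pd
from prophet import Prophet  # pip install prophet

# Synthetic hourly request-rate history with a daily cycle, for demonstration only.
hours = pd.date_range("2024-01-01", periods=14 * 24, freq="H")
rng = np.random.default_rng(0)
hour_of_day = hours.hour.to_numpy()
load = 500 + 200 * np.sin(2 * np.pi * hour_of_day / 24) + rng.normal(0, 20, len(hours))
history = pd.DataFrame({"ds": hours, "y": load})

model = Prophet(interval_width=0.95)
model.fit(history)

# Forecast the next 24 hours: the upper bound can drive pre-scaling decisions,
# and observed points outside the interval can be flagged as anomalies.
future = model.make_future_dataframe(periods=24, freq="H")
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(24))
```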
Posted 1 month ago
2.0 years
0 Lacs
India
Remote
About the role
SigNoz is a global open source project with users in 30+ countries. We are building an open-source observability platform which helps developers monitor their applications and troubleshoot problems quickly. In less than a year since our launch, we have reached 22,000+ GitHub stars, 6,000+ members in the Slack community and 150+ contributors.

Why us?
Opportunity to work in a global dev infra product
Backed by YC and some of the prominent VCs in the Bay Area
We are completely remote. No offices
Work directly with engineering teams at high-growth companies

We're Looking For Someone Who Is
Technical enough to architect observability solutions - you'll work directly with engineering teams to design their OpenTelemetry instrumentation strategy, customize dashboards, and optimize for their specific use cases.
Excellent at technical documentation - you'll create custom integration guides, troubleshooting docs, and best practices documentation that engineering teams actually want to follow.
Capable of hands-on implementation - you'll directly contribute to customer codebases, help debug instrumentation issues, and ensure successful deployments rather than just providing guidance.
Product-minded - you'll identify patterns in customer deployments and work with our product teams to build better defaults, templates, and tooling.

Who Would Be a Good Fit
2-6 years experience in technical roles - DevOps, SRE, Platform Engineering, or Solutions Engineering backgrounds
DevOps/Platform engineering background - containerization, Kubernetes, infrastructure as code, cloud platforms (AWS/GCP/Azure)
Strong programming skills - comfortable contributing to customer codebases in multiple languages (Go, Python, Node.js, Java)
Excellent technical writing - can create clear, actionable documentation that engineers actually use
Systems thinking - can understand complex distributed architectures and design monitoring strategies that scale
Strong learning skills - can quickly understand new customer environments and adapt solutions accordingly

Who May Not Be a Good Fit
People who prefer working in isolation rather than directly with customers
People who struggle with technical writing or documentation
Candidates who avoid hands-on coding or technical implementation
Those who prefer following established processes rather than creating new solutions
Posted 1 month ago
6.0 years
10 - 16 Lacs
Chennai, Tamil Nadu, India
On-site
About The Company (Industry & Sector)
A rapidly-scaling SaaS provider for warehouse automation, inventory planning and last-mile logistics, delivering cloud-native platforms that orchestrate replenishment, picking workflows and real-time visibility for global retailers and 3PLs. Leveraging a microservices stack built on ASP.NET Core, SignalR and Blazor, the engineering culture prizes clean architecture, high availability and developer autonomy—empowering teams to ship mission-critical APIs that move millions of units daily.

Role & Responsibilities
Design, develop and own RESTful APIs with ASP.NET Core 6/7, powering modules such as the Replenishment Engine, Pick/Bulk workflow and CSV-driven business rules.
Implement real-time messaging & notifications via SignalR and optimize for sub-second updates across web, mobile and IoT clients.
Enforce enterprise-grade security—ADFS/SAML SSO, RBAC and token lifecycles—while keeping services stateless and auditable.
Drive performance, scalability and fault-tolerance, using Redis caching, async patterns and rigorous load testing.
Integrate code with GitLab/Jenkins CI/CD, writing thorough unit/integration tests and automated deployment pipelines.
Partner with Blazor, mobile and data teams in agile rituals (sprint planning, code reviews, pair programming) to ship value every iteration.

Skills & Qualifications

Must-Have
4–6 years’ backend development, including 2+ years building ASP.NET Core Web APIs (v6/7).
Expert C#, Entity Framework Core/LINQ/SQL Server, and proven skill designing stateless, versioned microservices.
Hands-on SignalR for real-time comms plus experience parsing/validating structured files (CSV).
Solid grasp of API security, authentication & authorization (ADFS, SAML, JWT, RBAC).
Proficiency with Git, CI/CD (GitLab or Jenkins), API versioning and automated testing.
Performance-tuning mindset—profiling, caching (Redis) and telemetry with Serilog or similar.

Preferred
Exposure to Blazor or other SPA frameworks, mobile-backend integration and SSRS reporting.
Familiarity with containerisation/Kubernetes, message queues (RabbitMQ, Azure Service Bus) and observability (OpenTelemetry).
Experience implementing CQRS/event-sourcing patterns or distributed transaction strategies.
Certification in Microsoft Azure or .NET, or contributions to OSS libraries, tooling or tech blogs.
Background optimising large SQL workloads and designing highly concurrent systems.
Passion for mentoring peers and championing clean code, DDD and SOLID principles.

Skills: ci/cd, integration testing, serilog, authentication, ssrs, signalr, saml, sql server, rbac, blazor, .net, linq, entity framework core, authorization, jwt, git, adfs, .net core, opentelemetry, asp.net, api security, unit testing, performance tuning, kubernetes, c#, redis, jenkins, azure service bus, sql, asp.net core, asp.net core 6/7, rabbitmq, gitlab
Posted 1 month ago
5.0 years
17 - 25 Lacs
Chennai, Tamil Nadu, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 17,00,000 - Rs 25,00,000 (i.e., INR 17-25 LPA)
Min Experience: 5 years
Location: Chennai (open to candidates anywhere in Tamil Nadu)
Job Type: Full-time
Notice Period: Immediate joiners or candidates available within 30 days preferred
Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field
Requirements
Key Responsibilities & Skills
Core Competencies
Automation: Expertise in automating infrastructure using tools like CDK, CloudFormation, and Terraform
CI/CD Pipelines: Hands-on experience with GitHub Actions, Jenkins, or similar tools for continuous integration and deployment
Monitoring & Observability: Familiarity with tools like OpenTelemetry, Prometheus, and Grafana (see the metrics sketch after this posting)
API & Load Balancing: Understanding of REST, gRPC, Protocol Buffers, API Gateway, and load-balancing techniques
Technical Requirements
Strong foundation in Linux OS and system concepts
Experience handling production issues and ensuring system reliability
Proficiency in at least one programming or scripting language (Python, Go, or Shell)
Familiarity with Docker, microservices architecture, and cloud-native tools like Kubernetes
Understanding of RDBMS/NoSQL databases such as PostgreSQL and MongoDB
Additional Skills
Awareness of security practices including OWASP and static code analysis
Familiarity with fintech security standards (e.g., PCI-DSS, SOC 2) is a plus
AWS certifications are an added advantage
Knowledge of AWS data services like DMS, Glue, Athena, and Redshift is a bonus
Experience working in start-up environments and with distributed teams is desirable
Key Skills
DevOps | AWS | Terraform | CI/CD | Linux | Docker | Kubernetes | Python | Infrastructure Automation | Monitoring & Observability
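As a small, hedged example of the scripting-plus-Prometheus familiarity the posting asks for, the sketch below exposes custom application metrics with the prometheus_client library; the metric names, port and simulated workload are assumptions made for illustration.

```python
# Expose custom Prometheus metrics from a Python process (illustrative names and port).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request() -> None:
    # Time the work and record whether it succeeded.
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    status = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    # Prometheus can scrape http://localhost:8000/metrics once this is running.
    start_http_server(8000)
    while True:
        handle_request()
```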
Posted 1 month ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Experience: 4+ years
Notice Period: Immediate joiners
Key Responsibilities
Design and implement observability solutions across services and infrastructure
Set up and maintain SLI/SLO/SLA metrics, including tracking and reporting (see the error-budget sketch after this posting)
Configure and manage Dynatrace (on-prem and SaaS) environments, including custom dashboards, alerting, and root-cause analysis
Develop custom scripts and tools using Python to automate observability workflows
Instrument applications using OpenTelemetry for metrics, logs, and traces
Work with AWS monitoring tools like CloudWatch, X-Ray, and Lambda logs, and integrate them with the existing observability stack
Build and manage dashboards using Grafana to visualize application health and performance
Collaborate with development and SRE teams to embed observability practices into the SDLC
Monitor and maintain OpenShift ROSA (Red Hat OpenShift Service on AWS) clusters for observability
Required Skills & Qualifications
4+ years of experience in observability, SRE, or DevOps roles
Strong hands-on experience with Dynatrace (both on-prem and SaaS)
Proficiency in Python programming for scripting and automation
Deep understanding of observability principles: SLI, SLO, SLA
Experience with OpenTelemetry and application instrumentation
Solid knowledge of AWS services like CloudWatch, X-Ray, and Lambda
Familiarity with OpenShift ROSA in production environments
Experience with Grafana for building custom dashboards
Ability to read, understand, and write code in any general-purpose language
Good To Have
Experience integrating observability into CI/CD pipelines
Exposure to other APM tools (e.g., New Relic, Datadog)
Experience with Kubernetes-native monitoring (e.g., Prometheus, Fluentd, Loki)
(ref: hirist.tech)
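To ground the SLI/SLO/SLA responsibility, here is a back-of-the-envelope Python sketch of error-budget tracking; the 99.9% target and the request counts are made-up numbers for illustration, not figures from the role.

```python
# Error-budget arithmetic for an availability SLO (illustrative numbers only).
def error_budget_remaining(slo_target: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the window's error budget still unspent (negative means overspent)."""
    allowed_failures = total_requests * (1 - slo_target)
    if allowed_failures == 0:
        return 0.0
    return 1 - (failed_requests / allowed_failures)

if __name__ == "__main__":
    # A 99.9% SLO over 2,000,000 requests allows 2,000 failures; 1,200 failures
    # leaves 40% of the budget for the rest of the window.
    remaining = error_budget_remaining(0.999, 2_000_000, 1_200)
    print(f"Error budget remaining: {remaining:.1%}")
```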
Posted 1 month ago
6.0 years
10 - 16 Lacs
Bengaluru, Karnataka, India
On-site
About The Company (Industry & Sector)
A rapidly scaling SaaS provider for warehouse automation, inventory planning and last-mile logistics, delivering cloud-native platforms that orchestrate replenishment, picking workflows and real-time visibility for global retailers and 3PLs. Leveraging a microservices stack built on ASP.NET Core, SignalR and Blazor, the engineering culture prizes clean architecture, high availability and developer autonomy, empowering teams to ship mission-critical APIs that move millions of units daily.
Role & Responsibilities
Design, develop and own RESTful APIs with ASP.NET Core 6/7, powering modules such as the Replenishment Engine, Pick/Bulk workflow and CSV-driven business rules
Implement real-time messaging and notifications via SignalR, optimizing for sub-second updates across web, mobile and IoT clients
Enforce enterprise-grade security (ADFS/SAML SSO, RBAC and token lifecycles) while keeping services stateless and auditable
Drive performance, scalability and fault tolerance using Redis caching, async patterns and rigorous load testing
Integrate code with GitLab/Jenkins CI/CD, writing thorough unit/integration tests and automated deployment pipelines
Partner with Blazor, mobile and data teams in agile rituals (sprint planning, code reviews, pair programming) to ship value every iteration
Skills & Qualifications
Must-Have
4-6 years of backend development, including 2+ years building ASP.NET Core Web APIs (v6/7)
Expert C#, Entity Framework Core/LINQ/SQL Server, and proven skill designing stateless, versioned microservices
Hands-on SignalR for real-time communication, plus experience parsing and validating structured files (CSV)
Solid grasp of API security, authentication and authorization (ADFS, SAML, JWT, RBAC)
Proficiency with Git, CI/CD (GitLab or Jenkins), API versioning and automated testing
Performance-tuning mindset: profiling, caching (Redis) and telemetry with Serilog or similar
Preferred
Exposure to Blazor or other SPA frameworks, mobile-backend integration and SSRS reporting
Familiarity with containerisation/Kubernetes, message queues (RabbitMQ, Azure Service Bus) and observability (OpenTelemetry)
Experience implementing CQRS/event-sourcing patterns or distributed transaction strategies
Certification in Microsoft Azure or .NET, or contributions to OSS libraries, tooling or tech blogs
Background optimising large SQL workloads and designing highly concurrent systems
Passion for mentoring peers and championing clean code, DDD and SOLID principles
Skills: CI/CD, integration testing, Serilog, authentication, SSRS, SignalR, SAML, SQL Server, RBAC, Blazor, .NET, LINQ, Entity Framework Core, authorization, JWT, Git, ADFS, .NET Core, OpenTelemetry, ASP.NET, API security, unit testing, performance tuning, Kubernetes, C#, Redis, Jenkins, Azure Service Bus, SQL, ASP.NET Core, ASP.NET Core 6/7, RabbitMQ, GitLab
Posted 1 month ago
6.0 years
10 - 16 Lacs
Pune, Maharashtra, India
On-site
About The Company (Industry & Sector)
A rapidly scaling SaaS provider for warehouse automation, inventory planning and last-mile logistics, delivering cloud-native platforms that orchestrate replenishment, picking workflows and real-time visibility for global retailers and 3PLs. Leveraging a microservices stack built on ASP.NET Core, SignalR and Blazor, the engineering culture prizes clean architecture, high availability and developer autonomy, empowering teams to ship mission-critical APIs that move millions of units daily.
Role & Responsibilities
Design, develop and own RESTful APIs with ASP.NET Core 6/7, powering modules such as the Replenishment Engine, Pick/Bulk workflow and CSV-driven business rules
Implement real-time messaging and notifications via SignalR, optimizing for sub-second updates across web, mobile and IoT clients
Enforce enterprise-grade security (ADFS/SAML SSO, RBAC and token lifecycles) while keeping services stateless and auditable
Drive performance, scalability and fault tolerance using Redis caching, async patterns and rigorous load testing
Integrate code with GitLab/Jenkins CI/CD, writing thorough unit/integration tests and automated deployment pipelines
Partner with Blazor, mobile and data teams in agile rituals (sprint planning, code reviews, pair programming) to ship value every iteration
Skills & Qualifications
Must-Have
4-6 years of backend development, including 2+ years building ASP.NET Core Web APIs (v6/7)
Expert C#, Entity Framework Core/LINQ/SQL Server, and proven skill designing stateless, versioned microservices
Hands-on SignalR for real-time communication, plus experience parsing and validating structured files (CSV)
Solid grasp of API security, authentication and authorization (ADFS, SAML, JWT, RBAC)
Proficiency with Git, CI/CD (GitLab or Jenkins), API versioning and automated testing
Performance-tuning mindset: profiling, caching (Redis) and telemetry with Serilog or similar
Preferred
Exposure to Blazor or other SPA frameworks, mobile-backend integration and SSRS reporting
Familiarity with containerisation/Kubernetes, message queues (RabbitMQ, Azure Service Bus) and observability (OpenTelemetry)
Experience implementing CQRS/event-sourcing patterns or distributed transaction strategies
Certification in Microsoft Azure or .NET, or contributions to OSS libraries, tooling or tech blogs
Background optimising large SQL workloads and designing highly concurrent systems
Passion for mentoring peers and championing clean code, DDD and SOLID principles
Skills: CI/CD, integration testing, Serilog, authentication, SSRS, SignalR, SAML, SQL Server, RBAC, Blazor, .NET, LINQ, Entity Framework Core, authorization, JWT, Git, ADFS, .NET Core, OpenTelemetry, ASP.NET, API security, unit testing, performance tuning, Kubernetes, C#, Redis, Jenkins, Azure Service Bus, SQL, ASP.NET Core, ASP.NET Core 6/7, RabbitMQ, GitLab
Posted 1 month ago