8.0 - 15.0 years
4 - 11 Lacs
Kolkata, West Bengal, India
On-site
Role Description
We are seeking an experienced OpenSearch Engineer to join our dynamic team in India. The ideal candidate will have a strong background in search technologies, particularly OpenSearch, and will be responsible for designing and maintaining robust search solutions that meet the needs of our clients.
Responsibilities
- Design, implement, and maintain OpenSearch clusters for optimal performance and scalability.
- Develop and optimize search queries, ensuring high availability and responsiveness of the search service.
- Monitor system performance and troubleshoot issues to ensure maximum uptime and reliability.
- Collaborate with cross-functional teams to understand data requirements and translate them into effective OpenSearch solutions.
- Implement security features and best practices to protect data and ensure compliance with relevant regulations.
- Stay updated with the latest trends and updates in OpenSearch and related technologies.
Skills and Qualifications
- 8-15 years of experience in search engine technologies, with a strong focus on OpenSearch or Elasticsearch.
- Proficient in data modeling, search optimization, and query performance tuning.
- Strong knowledge of Linux/Unix systems and scripting languages (e.g., Bash, Python).
- Experience with data ingestion tools and ETL processes to populate search indexes.
- Familiarity with cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes).
- Understanding of distributed systems and microservices architecture.
- Excellent problem-solving skills and ability to work under pressure.
- Strong communication and collaboration skills to work effectively in a team environment.
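As an illustration of the index and query-tuning work this listing describes, here is a minimal, hedged sketch using the opensearch-py client: create an index with explicit shard settings and run a boosted full-text query. The host, credentials, index name, and field names are hypothetical placeholders, not details from the posting.

```python
from opensearchpy import OpenSearch

# Hypothetical cluster endpoint and credentials -- replace with real values.
client = OpenSearch(
    hosts=[{"host": "search.example.internal", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=True,
)

# Create an index; shard/replica counts here are for illustration only.
client.indices.create(
    index="products",
    body={
        "settings": {"number_of_shards": 3, "number_of_replicas": 1},
        "mappings": {
            "properties": {
                "title": {"type": "text"},
                "description": {"type": "text"},
                "category": {"type": "keyword"},
            }
        },
    },
)

# A multi_match query that boosts title matches over description matches.
response = client.search(
    index="products",
    body={
        "query": {
            "multi_match": {
                "query": "wireless headphones",
                "fields": ["title^3", "description"],
            }
        },
        "size": 10,
    },
)
print(response["hits"]["total"])
```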
Posted 1 week ago
4.0 - 7.0 years
15 - 25 Lacs
Bengaluru
Hybrid
Role & responsibilities
- Design and develop robust, scalable, and high-performance search engine components using Java.
- Implement indexing and retrieval logic for structured and unstructured data.
- Integrate with tools like Apache Lucene, Solr, or Elasticsearch.
- Develop and maintain APIs for search functionalities.
- Optimize query performance and relevance using techniques like TF-IDF, BM25, or vector-based retrieval.
- Collaborate with product managers, data scientists, and DevOps to define system requirements and deployment pipelines.
- Ensure code quality with unit/integration tests, code reviews, and documentation.
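The ranking technique named above (BM25) can be illustrated with a short, self-contained scorer. This is a hedged sketch of the standard Okapi BM25 formula in Python, not code from the hiring company; the corpus, parameter values (k1=1.5, b=0.75), and whitespace tokenization are illustrative assumptions.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document in `docs` against `query` using Okapi BM25."""
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    avg_len = sum(len(d) for d in tokenized) / n_docs
    # Document frequency: how many documents contain each term.
    df = Counter()
    for doc in tokenized:
        df.update(set(doc))

    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1.0)
            norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avg_len))
            score += idf * norm
        scores.append(score)
    return scores

docs = [
    "fast full text search engine",
    "distributed search with lucene and elasticsearch",
    "recipe for chocolate cake",
]
print(bm25_scores("search engine", docs))
```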
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
We are looking for a Senior Site Reliability Engineer to join Okta's Workflows SRE team, which is part of our Emerging Products Group (EPG). Okta Workflows is the foundation for secure integration between cloud services. By harnessing the power of the cloud, Okta allows people to quickly integrate different services while still enforcing strong security policies. With Okta Workflows, organizations can implement no-code or low-code workflows quickly, easily, at large scale, and at low total cost. Thousands of customers trust Okta Workflows to help their organizations work faster, boost revenue, and stay secure. If you like to be challenged and have a passion for solving large-scale automation, testing, and tuning problems, we would love to hear from you. The ideal candidate exemplifies the ethic of "if you have to do something more than once, automate it" and can rapidly self-educate on new concepts and tools.
What you'll be doing
- Designing, building, running, and monitoring the global production infrastructure for Okta Workflows and other EPG products.
- Leading and implementing secure, scalable Kubernetes clusters across multiple environments.
- Being an evangelist for security best practices and leading initiatives/projects to strengthen our security posture for critical infrastructure.
- Responding to production incidents and determining how we can prevent them in the future.
- Triaging and troubleshooting complex production issues to ensure reliability and performance.
- Enhancing automation workflows for patching, vulnerability assessments, and incident response.
- Continuously evolving our monitoring tools and platform.
- Promoting and applying best practices for building scalable and reliable services across engineering.
- Developing and maintaining technical documentation, runbooks, and procedures.
- Supporting a highly available and large-scale Kubernetes and AWS environment as part of an on-call rotation.
- Being a technical SME for a team that designs and builds Okta's production infrastructure, focusing on security at scale in the cloud.
What you'll bring to the role
- You are always willing to go the extra mile: see a problem, fix the problem.
- You are passionate about encouraging the development of engineering peers and leading by example.
- Experience with Kubernetes deployments in AWS and/or GCP cloud environments.
- Understanding of and familiarity with configuration management tools like Chef, Terraform, or Ansible.
- Expert-level abilities in operational tooling languages such as Go and shell, and use of source control.
- Knowledge of various types of data stores, particularly PostgreSQL, Redis, and OpenSearch.
- Experience with industry-standard security tools like Nessus and OSQuery.
- Knowledge of CI/CD principles, Linux fundamentals, OS hardening, networking concepts, and IP protocols.
- Skilled in using Datadog for real-time monitoring and proactive incident detection.
- Strong ability to collaborate with cross-functional teams and promote a security-first culture.
Experience in the following
- 5+ years of experience running and managing complex AWS or other cloud networking infrastructure resources, including architecture, security, and scalability.
- 5+ years of experience with Ansible, Chef, and/or Terraform.
- 3+ years of experience in cloud security, including IAM (Identity and Access Management) and/or secure identity management for cloud platforms and Kubernetes.
- 3+ years of experience automating CI/CD pipelines using tools such as Spinnaker or ArgoCD, with an emphasis on integrating security throughout the process.
- Proven experience implementing monitoring and observability solutions such as Datadog or Splunk to enhance security and detect incidents in real time.
- Strong leadership and collaboration skills, with experience working cross-functionally with security engineers and developers to enforce security best practices and policies.
- Strong Linux understanding and experience.
- Strong security background and knowledge.
- BS in computer science (or equivalent experience).
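As a small illustration of the security-posture automation this role describes, here is a hedged boto3 sketch that flags security groups exposing SSH to the public internet. The region and the specific check are assumptions chosen for the example, not Okta tooling.

```python
import boto3

# Hypothetical single region; a real audit would iterate over all in-scope regions.
ec2 = boto3.client("ec2", region_name="us-east-1")

open_to_world = []
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        # Flag inbound rules that cover port 22 and allow 0.0.0.0/0.
        covers_ssh = rule.get("FromPort") is not None and rule["FromPort"] <= 22 <= rule["ToPort"]
        wide_open = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        if covers_ssh and wide_open:
            open_to_world.append(sg["GroupId"])

print("Security groups exposing SSH publicly:", sorted(set(open_to_world)))
```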
Posted 1 week ago
8.0 - 10.0 years
10 - 15 Lacs
Hyderabad
Work from Office
Summary
As an employee at Thomson Reuters, you will play a role in shaping and leading the global knowledge economy. Our technology drives global markets and helps professionals around the world make decisions that matter. As the world's leading provider of intelligent information, we want your unique perspective to create the solutions that advance our business and your career.
About the Role
As a Senior DevOps Engineer you will be responsible for building and supporting AWS infrastructure used to host a platform offering audit solutions. This engineer is constantly looking to optimize systems and services for security, automation, and performance/availability, while ensuring solutions developed adhere and align to architecture standards. This individual is responsible for ensuring that technology systems and related procedures adhere to organizational values. The person will also assist developers with technical issues in the initiation, planning, and execution phases of projects. These activities include: the definition of needs, benefits, and technical strategy; research and development within the project life cycle; technical analysis and design; and support of operations staff in executing, testing, and rolling out the solutions.
This role will be responsible for:
- Plan, deploy, and maintain critical business applications in prod/non-prod AWS environments
- Design and implement appropriate environments for those applications, engineer suitable release management procedures, and provide production support
- Influence broader technology groups in adopting cloud technologies, processes, and best practices
- Drive improvements to processes and design enhancements to automation to continuously improve production environments
- Maintain and contribute to our knowledge base and documentation
- Provide leadership, technical support, user support, technical orientation, and technical education activities to project teams and staff
- Manage change requests between development, staging, and production environments
- Provision and configure hardware, peripherals, services, settings, directories, storage, etc. in accordance with standards and project/operational requirements
- Perform daily system monitoring, verifying the integrity and availability of all hardware, server resources, systems, and key processes, reviewing system and application logs, and verifying completion of automated processes
- Perform ongoing performance tuning, infrastructure upgrades, and resource optimization as required
- Provide Tier II support for incidents and requests from various constituencies
- Investigate and troubleshoot issues
- Research, develop, and implement innovative and, where possible, automated approaches for system administration tasks
About you
You are fit for the Senior DevOps Engineer role if your background includes:
Required:
- 8+ years at Senior DevOps level
- Knowledge of the Azure/AWS cloud platforms: S3, CloudFront, CloudFormation, RDS, OpenSearch, ActiveMQ
- Knowledge of CI/CD, preferably on AWS developer tools
- Scripting knowledge, preferably in Python, Bash, or PowerShell
- Have contributed as a DevOps engineer responsible for planning, building, and deploying cloud-based solutions
- Knowledge of building and deploying containers with Kubernetes (exposure to AWS EKS is preferable)
- Knowledge of Infrastructure as Code tools such as Bicep or Terraform, and Ansible
- Knowledge of GitHub Actions, PowerShell, and GitOps
Nice to have:
- Experience with building and deploying .NET Core or Java-based solutions
- Strong understanding of an API-first strategy
- Knowledge and some experience implementing a testing strategy in a continuous deployment environment
- Have owned and operated continuous delivery/deployment
- Have set up monitoring tools and disaster recovery plans to ensure business continuity
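To make the infrastructure-as-code responsibility above concrete, here is a hedged boto3 sketch that creates a CloudFormation stack from a template file and waits for completion. The stack name, template path, and parameters are placeholders, not Thomson Reuters specifics.

```python
import boto3

cf = boto3.client("cloudformation", region_name="us-east-1")

# Hypothetical template describing, e.g., an S3 bucket and an RDS instance.
with open("audit-platform.yaml") as f:
    template_body = f.read()

cf.create_stack(
    StackName="audit-platform-dev",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "dev"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until the stack reaches CREATE_COMPLETE (raises on failure/rollback).
waiter = cf.get_waiter("stack_create_complete")
waiter.wait(StackName="audit-platform-dev")
print("Stack created")
```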
Posted 2 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Pune
Work from Office
Key Responsibilities
- Act as the primary technical point of contact and trusted advisor for assigned enterprise customers.
- Build and maintain deep, strategic relationships with customer stakeholders, including technical teams and senior leadership.
- Understand customer-specific configurations, customizations, and integration patterns across their AppZen environments.
- Facilitate alignment between customer goals and AppZen's capabilities by collaborating cross-functionally with Engineering, Product Management, Professional Services, CSM, and Support.
- Influence internal teams with customer insights to shape product improvements and roadmap decisions.
- Conduct regular technical health reviews, system assessments, and operational check-ins to ensure optimal platform performance.
- Review upcoming customer events and planned production activities to proactively identify and mitigate risks.
- Drive initiatives to improve customer productivity and long-term satisfaction.
- Act as the escalation point for technical incidents, coordinating resolution efforts across internal teams.
- Lead in-depth root cause analyses and implement preventive measures to reduce recurring issues.
- Enhance support processes and ticket handling by driving internal quality improvements and knowledge-sharing initiatives.
- Promote customer self-sufficiency by enabling teams to effectively use AppZen support tools, documentation, and best practices.
- Provide expert-level guidance on troubleshooting, integrations, and observability tools.
What We're Looking For
- Experience: 3-5 years in a TAM, Technical Support, or Solutions Engineering role within a SaaS or enterprise software company.
- Domain Expertise: Background in finance automation, accounts payable, or expense auditing platforms is highly preferred.
- Technical Skills: Experience in Python, Go, or similar for debugging and automation. Strong knowledge of REST APIs and Postman. Proficiency in the AWS Console and tools like Kibana or OpenSearch. Understanding of AI and Data Science concepts and their application in enterprise solutions.
- Soft Skills: Excellent verbal and written communication, problem-solving, and customer-facing engagement skills. Excellent analytical skills; highly organized and action-oriented.
- Mindset: Strong ownership, proactive attitude, and customer-centric approach.
- Education: Bachelor's degree in Computer Science, Engineering, or a related technical field.
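Because the role leans heavily on REST API debugging, here is a hedged Python requests sketch of the kind of quick health check a TAM might script: call an endpoint, time it, retry on transient failures, and report the result. The URL, token, and retry policy are hypothetical, not AppZen's actual API.

```python
import time
import requests

API_URL = "https://api.example.com/v1/invoices"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}      # hypothetical credential

def check_endpoint(url, retries=3, timeout=5.0):
    """Return (status_code, latency_seconds), retrying on 5xx errors and timeouts."""
    for attempt in range(1, retries + 1):
        start = time.monotonic()
        try:
            resp = requests.get(url, headers=HEADERS, timeout=timeout)
        except requests.RequestException as exc:
            print(f"attempt {attempt}: request failed ({exc})")
            continue
        latency = time.monotonic() - start
        if resp.status_code < 500:
            return resp.status_code, latency
        print(f"attempt {attempt}: server error {resp.status_code}, retrying")
    return None, None

status, latency = check_endpoint(API_URL)
print(f"status={status}, latency={latency:.2f}s" if status else "endpoint unreachable")
```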
Posted 2 weeks ago
10.0 - 12.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
Remote
About the Role
The Search platform currently powers Rider and Driver Maps, Uber Eats, Groceries, Fulfilment, Freight, Customer Obsession, and many such products and systems across Uber. We are building a unified platform for all of Uber's search use-cases. The team is building the platform on OpenSearch, and we already support in-house search infrastructure built on top of Apache Lucene. Our mission is to build a fully managed search platform while delivering a delightful user experience through low-code data and control APIs. We are looking for an Engineering Manager with strong technical expertise to define a holistic vision and help build a highly scalable, reliable, and secure platform for Uber's core business use-cases. Come join our team to build search functionality at Uber scale for some of the most exciting areas in the marketplace economy today. An ideal candidate will be working closely with a highly cross-functional team, including product management, engineering, tech strategy, and leadership, to drive our vision and build a strong team. A successful candidate will need to demonstrate strong technical skills in system architecture and design. Experience with open-source systems and distributed systems is a big plus for this role. The EM2 role will require building a team of software engineers, while directly contributing on the technical side too.
What the Candidate Will Need / Bonus Points
What the Candidate Will Do
- Provide technical leadership, influence, and partner with fellow engineers to architect, design, and build infrastructure that can stand the test of scale and availability, while reducing operational overhead.
- Lead, manage, and grow a team of software engineers. Mentor and guide the professional and technical development of engineers on your team, and continuously improve software engineering practices.
- Own the craftsmanship, reliability, and scalability of your solutions. Encourage innovation, implementation of groundbreaking technologies, outside-of-the-box thinking, teamwork, and self-organization.
- Hire top-performing engineering talent and maintain our dedication to diversity and inclusion.
- Collaborate with platform, product, and security engineering teams to enable successful use of infrastructure and foundational services, and manage upstream and downstream dependencies.
Basic Qualifications
- Bachelor's degree (or higher) in Computer Science or a related field.
- 10+ years of software engineering industry experience.
- 8+ years of experience as an IC building large-scale distributed software systems.
- Outstanding technical skills in backend: Uber managers can lead from the front when the situation calls for it.
- 1+ years of frontline managing a diverse set of engineers.
Preferred Qualifications
- Prior experience with Search or big data systems - OpenSearch, Lucene, Pinot, Druid, Spark, Hive, HUDI, Iceberg, Presto, Flink, HDFS, YARN, etc. - preferred.
We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together. Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .
Posted 2 weeks ago
5.0 - 10.0 years
20 - 27 Lacs
Hyderabad
Work from Office
Position: Experienced Data Engineer
We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.
Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in Data Engineering on a cloud platform.
Key Skills & Experience:
- Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch jobs.
- Strong expertise in SQL and Python; DBT and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka.
- In-depth knowledge of ETL data patterns and Spark-based ETL pipelines.
- Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools.
- Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS.
- Proficiency in Kubernetes, container orchestration, and CI/CD pipelines.
- Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions.
- Experience with orchestration tools such as Apache Airflow and serverless/FaaS services.
- Exposure to NoSQL databases is a plus.
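As a hedged illustration of the Spark-based ETL pipelines mentioned above, the sketch below reads raw JSON events from S3, applies a simple transformation, and writes partitioned Parquet back to S3. The bucket names, paths, and columns are made up for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical raw zone: newline-delimited JSON order events.
raw = spark.read.json("s3://example-raw-zone/orders/2024/")

cleaned = (
    raw.filter(F.col("order_status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_timestamp"))
       .select("order_id", "customer_id", "order_date", "total_amount")
)

# Write a partitioned Parquet table to the curated zone.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-zone/orders/"))

spark.stop()
```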
Posted 3 weeks ago
10.0 - 12.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Description:
Oracle Cloud Infrastructure (OCI) is a pioneering force in cloud technology, merging the agility of startups with the robustness of an enterprise software leader. Within OCI, the Oracle Generative AI Service team spearheads innovative solutions at the convergence of artificial intelligence and cloud infrastructure. As part of this team, you'll contribute to large-scale cloud solutions utilizing cutting-edge machine learning technologies, aimed at addressing complex global challenges. Join us to create innovative solutions using top-notch machine learning technologies to solve global challenges. We're looking for an experienced Principal Applied Data Scientist to join our OCI Gen-AI Solutions team for strategic customers. In this role, you'll collaborate with applied scientists and product managers to design, develop, and deploy tailored Gen-AI solutions, with an emphasis on Large Language Models (LLMs), Agents, MPC, and Retrieval Augmented Generation (RAG) with large OpenSearch clusters. As part of the OCI Gen AI and Data Solutions for strategic customers team, you will be responsible for developing innovative Gen AI and data services for our strategic customers. As a Principal Applied Data Scientist, you'll lead the development of advanced Gen AI solutions using the latest ML technologies combined with Oracle's cloud expertise. Your work will significantly impact sectors like financial services, telecom, healthcare, and code generation by creating distributed, scalable, high-performance solutions for strategic customers. You will work directly with key customers and accompany them on their Gen AI journey: understanding their requirements, helping them envision, design, and build the right solutions, and working with their ML engineering teams to remove blockers. You will dive deep into model structure to optimize model performance and scalability. You will build state-of-the-art solutions with brand new technologies in this fast-evolving area. You will configure large-scale OpenSearch clusters and set up ingestion pipelines to get data into OpenSearch. You will diagnose, troubleshoot, and resolve issues in AI model training and serving. You may also perform other duties as assigned. Build re-usable solution patterns and reference solutions/showcases that can apply across multiple customers. Be an enthusiastic, self-motivated, and great collaborator. Be our product evangelist: engage directly with customers and partners, and participate and present in external events and conferences.
Qualifications and experience
- Bachelor's or master's degree in computer science or an equivalent technical field with 10+ years of experience.
- Able to communicate technical ideas effectively, verbally and in writing (technical proposals, design specs, architecture diagrams, and presentations).
- Demonstrated experience in designing and implementing scalable AI models and solutions for production; relevant professional experience as an end-to-end solutions engineer or architect (data engineering, data science, and ML engineering is a plus), with evidence of close collaboration with PM and Dev teams.
- Experience with OpenSearch, vector databases, PostgreSQL, and Kafka Streaming.
- Practical experience with setting up and fine-tuning large OpenSearch clusters.
- Experience in setting up data ingestion pipelines with OpenSearch.
- Experience with search algorithms, indexing, and optimizing latency and response times.
- Practical experience with the latest technologies in LLM and generative AI, such as parameter-efficient fine-tuning, instruction fine-tuning, and advanced prompt engineering techniques like Tree-of-Thoughts.
- Familiarity with Agents, Agent frameworks, and Model Predictive Control (MPC).
- Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLFlow), LMQL, Guidance, etc.
- Strong publication record, including as a lead author or reviewer, in top-tier journals or conferences.
- Ability and passion to mentor and develop junior machine learning engineers.
- Proficient in Python and shell scripting tools.
Preferred Qualifications:
- Master's or Bachelor's degree in a related field with 5+ years of relevant experience.
- Experience with RAG-based solutions architecture.
- Familiarity with OpenSearch and vector stores as a knowledge store.
- Knowledge of LLMs and experience delivering Generative AI and Agent models is a significant plus.
- Familiarity and experience with the latest advancements in computer vision and multimodal modeling is a plus.
- Experience with semantic search, multi-modal search, and conversational search.
- Experience working in a public cloud environment, and in-depth knowledge of the IaaS/PaaS industry and competitive capabilities.
- Experience with popular model training and serving frameworks like KServe, KubeFlow, Triton, etc.
- Experience with LLM fine-tuning, especially the latest parameter-efficient fine-tuning technologies and multi-task serving technologies.
- Deep technical understanding of Machine Learning, Deep Learning architectures like Transformers, training methods, and optimizers.
- Experience with deep learning frameworks (such as PyTorch, JAX, or TensorFlow) and deep learning architectures (especially Transformers).
- Experience in diagnosing, fixing, and resolving issues in AI model training and serving.
Career Level - IC4
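For the RAG-with-OpenSearch work this listing describes, here is a hedged sketch of the retrieval step: embed a user question and run a k-NN query against a vector index. The embedding stub, index name, and field names are placeholders, and the query shape assumes the OpenSearch k-NN plugin with an `embedding` field of type knn_vector; it is not Oracle's actual implementation.

```python
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "vector-search.example.internal", "port": 9200}],  # hypothetical endpoint
    use_ssl=True,
    verify_certs=True,
)

def embed(text):
    """Placeholder embedding: a real system would call the chosen embedding model."""
    return [0.0] * 768  # dummy 768-dimensional vector

def retrieve_context(question, k=5):
    """Return the top-k document chunks most similar to the question."""
    response = client.search(
        index="doc-chunks",
        body={
            "size": k,
            "query": {"knn": {"embedding": {"vector": embed(question), "k": k}}},
            "_source": ["chunk_text", "source_uri"],
        },
    )
    return [hit["_source"]["chunk_text"] for hit in response["hits"]["hits"]]

# The retrieved chunks would then be placed into the LLM prompt (the "G" in RAG).
```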
Posted 3 weeks ago
8.0 - 10.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
As an engineer on the SaaS Cloud Security Team, you will combine software and systems engineering to build and integrate tools and applications that strengthen and innovate in the security space within the Oracle SaaS cloud and infrastructure. You will play a critical role in the design, development, and execution of multiyear security strategies designed to improve the security and compliance posture of Oracle's SaaS services. The organization is responsible for designing, developing, and deploying new cyber-security solutions and integrating with existing third-party vendor security systems, following a DevSecOps approach.
The Team
You thrive on collaboration. You make the people around you better. You love to collaborate with peers, engineers, operations, product managers, executives, and designers and inspire them to do their best. You are passionate and experienced as a security leader. You engage with your peers, the industry, and experts to stay current on research, threats, and innovation to drive the right directions and strategies from a security infrastructure perspective. You are customer focused. Our success is based on customer satisfaction (internal and external) and how we build customer empathy into our culture, our execution, and our results. You make people successful. It is not about the I, it is about the team and making your peers and the organization successful. As leaders we focus on making our team members as productive and empowered as possible to ensure optimized execution and results. You are open and transparent. We are a team that is open, honest, and shares openly with ourselves and our customers to build trust. You seek learning and feedback. You are self-critical and proactively seek out feedback. We lead by example and share feedback and learnings in a safe and productive way that focuses on improvements and root cause analysis, never blame, as the desired result. You make things happen. You own and are accountable for delivering on the overall strategy and missions of the organization. And finally, you want to be part of creating dramatic and impactful change at a company that is committed to driving security innovation and world-class engineering in the SaaS Cloud Security space. Career Level - IC4
Key responsibilities:
- You will be a key member of the team in innovating, designing, and developing security management systems for all SaaS Cloud infrastructure.
- You will participate in the architecture and design of systems, and use modern programming practices.
- You will bring your experience in building, deploying, tuning, and owning distributed and resilient systems.
- You will work closely with other security engineering teams to share information and advance the team's skills, experience, and capabilities.
- You will drive a culture of quality and attention to detail at Oracle through your multi-faceted leadership approach, including the results your team delivers, your innovation, your mentorship, and your engineering of world-class security management solutions.
- You will work closely with your partners and peers in security operations who use your software to carry out their security analysis and management activities.
- You will influence and assist in new security solutions and process definition, in line with the scale and rate of change of a multi-application SaaS environment.
Required Experience:
- Bachelor's degree in Computer Science or a related field with 8 years of professional experience as an SRE/DevOps engineer
- Experience working with fault-tolerant, highly available, high-throughput, scalable distributed systems
- Experience with Kubernetes and its ecosystem in production, possibly at scale
- Experience with designing and managing data streaming platforms based on Kafka
- Experience with the Elasticsearch/OpenSearch stack
- Experience with Terraform (or other IaC)
- Experience with Linux system administration, configuration, troubleshooting, monitoring, and alerting
- Experience in building and managing containerized applications, preferably with Docker and Kubernetes
- Experience with Continuous Integration / Continuous Delivery tools
- Experience with any of the cloud providers such as OCI, AWS, GCP
- Excellent written and verbal communication skills
Preferred Experience:
- Experience with Elastic/OpenSearch and one or more data storage solutions (e.g., Oracle, Cassandra, Redis)
- Experience in secure network design and administration (OSI/DoD model, TCP/IP, TLS, VPN, routing, HTTP, load balancing)
- Experience working in security, security operations, and security incident response
- Experience with GitOps continuous delivery
Career Level - IC4
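The Kafka-based streaming platforms mentioned in the requirements can be illustrated with a minimal consumer. This hedged sketch uses the kafka-python library to read security events from a topic; the brokers, topic, group id, and event schema are hypothetical.

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "security-events",                      # hypothetical topic
    bootstrap_servers=["broker1:9092"],     # hypothetical broker list
    group_id="secops-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Route high-severity findings to alerting; everything else could be indexed
    # into OpenSearch for later analysis.
    if event.get("severity") == "critical":
        print("ALERT:", event.get("rule_id"), event.get("host"))
```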
Posted 3 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Grade: 7
Purpose of your role
This role sits within the ISS Data Platform Team. The Data Platform team is responsible for building and maintaining the platform that enables the ISS business to operate. This role is appropriate for a Lead Data Engineer capable of taking ownership of and delivering a subsection of the wider data platform.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.
Essential Skills and Experience
Core Technical Skills
- Expert in leveraging cloud-based data platform (Snowflake, Databricks) capabilities to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services like Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL. Open to expertise in Java/Scala, but enterprise experience of Python is required.
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.
- Data Security & Performance Optimization: Experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (Dynamo, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).
Bonus Technical Skills
- Strong experience in containerisation and experience deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks like FastAPI.
Key Soft Skills
- Problem-Solving: Leadership experience in problem-solving and technical decision-making.
- Communication: Strong in strategic communication and stakeholder engagement.
- Project Management: Experienced in overseeing project lifecycles, working with Project Managers to manage resources.
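Since orchestration tools such as Airflow are called out above, here is a hedged sketch of a minimal daily ingestion DAG using Airflow 2.x. The DAG id, schedule, and task logic are illustrative assumptions only.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull CDC batch from the source system")   # placeholder for real extraction logic

def load(**context):
    print("load the batch into the lakehouse")       # placeholder for real load logic

with DAG(
    dag_id="daily_cdc_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```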
Posted 3 weeks ago
5 - 7 years
8 - 10 Lacs
Hyderabad
Work from Office
About The Role
As a Senior Backend Engineer you will develop reliable, secure, and performant APIs that apply Kensho's AI capabilities to specific customer workflows. You will collaborate with colleagues from Product, Machine Learning, Infrastructure, and Design, as well as with other engineers within Applications. You have a demonstrated capacity for depth, and are comfortable working with a broad range of technologies. Your verbal and written communication is proactive, efficient, and inclusive of your geographically-distributed colleagues. You are a thoughtful, deliberate technologist and share your knowledge generously. Equivalent to Grade 11 Role (Internal)
You will:
- Design, develop, test, document, deploy, maintain, and improve software
- Manage individual project priorities, deadlines, and deliverables
- Work with key stakeholders to develop system architectures, API specifications, implementation requirements, and complexity estimates
- Test assumptions through instrumentation and prototyping
- Promote ongoing technical development through code reviews, knowledge sharing, and mentorship
- Optimize Application Scaling: Efficiently scale ML applications to maximize compute resource utilization and meet high customer demand.
- Address Technical Debt: Proactively identify and propose solutions to reduce technical debt within the tech stack.
- Enhance User Experiences: Collaborate with Product and Design teams to develop ML-based solutions that enhance user experiences and align with business goals.
- Ensure API security and data privacy by implementing best practices and compliance measures.
- Monitor and analyze API performance and reliability, making data-driven decisions to improve system health.
- Contribute to architectural discussions and decisions, ensuring scalability, maintainability, and performance of the backend systems.
Qualifications
- At least 5+ years of direct experience developing customer-facing APIs within a team
- Thoughtful and efficient communication skills (both verbal and written)
- Experience developing RESTful APIs using a variety of tools
- Experience turning abstract business requirements into concrete technical plans
- Experience working across many stages of the software development lifecycle
- Sound reasoning about the behavior and performance of loosely-coupled systems
- Proficiency with algorithms (including time and space complexity analysis), data structures, and software architecture
- At least one domain of demonstrable technical depth
- Familiarity with CI/CD practices and tools to streamline deployment processes
- Experience with containerization technologies (e.g., Docker, Kubernetes) for application deployment and orchestration
Technologies We Love
- Python, Django, FastAPI
- mypy, OpenAPI
- RabbitMQ, Celery, Kafka
- OpenSearch, PostgreSQL, Redis
- Git, Jsonnet, Jenkins, Docker, Kubernetes
- Airflow, AWS, Terraform
- Grafana, Prometheus
- ML Libraries: PyTorch, Scikit-learn, Pandas
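Given the FastAPI and OpenSearch items in the technology list, here is a hedged sketch of a small customer-facing search endpoint. The service name, index, fields, and client wiring are assumptions for illustration, not Kensho's actual API.

```python
from fastapi import FastAPI, HTTPException, Query
from opensearchpy import OpenSearch
from pydantic import BaseModel

app = FastAPI(title="document-search")  # hypothetical service name
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

class Hit(BaseModel):
    doc_id: str
    title: str
    score: float

@app.get("/v1/search", response_model=list[Hit])
def search(q: str = Query(..., min_length=2), limit: int = 10):
    """Full-text search over a hypothetical 'documents' index."""
    try:
        resp = client.search(
            index="documents",
            body={"query": {"match": {"title": q}}, "size": limit},
        )
    except Exception as exc:  # in production, catch narrower client exceptions
        raise HTTPException(status_code=502, detail="search backend unavailable") from exc
    return [
        Hit(doc_id=h["_id"], title=h["_source"]["title"], score=h["_score"])
        for h in resp["hits"]["hits"]
    ]
```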
Posted 2 months ago
3 - 6 years
10 - 15 Lacs
Pune
Work from Office
Role & responsibilities
Requirements:
- 3+ years of hands-on experience with AWS services including EMR, Glue, Athena, Lambda, SQS, OpenSearch, CloudWatch, VPC, IAM, AWS Managed Airflow, security groups, S3, RDS, and DynamoDB.
- Proficiency in Linux and experience with management tools like Apache Airflow and Terraform. Familiarity with CI/CD tools, particularly GitLab.
Responsibilities:
- Design, deploy, and maintain scalable and secure cloud and on-premises infrastructure.
- Monitor and optimize performance and reliability of systems and applications.
- Implement and manage continuous integration and continuous deployment (CI/CD) pipelines.
- Collaborate with development teams to integrate new applications and services into existing infrastructure.
- Conduct regular security assessments and audits to ensure compliance with industry standards.
- Provide support and troubleshooting assistance for infrastructure-related issues.
- Create and maintain detailed documentation for infrastructure configurations and processes.
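To illustrate the monitoring responsibility listed above, here is a hedged boto3 sketch that creates a CloudWatch alarm on Lambda error counts and routes notifications to an SNS topic. The function name, topic ARN, and thresholds are placeholders chosen for the example.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="orders-ingest-lambda-errors",       # hypothetical alarm name
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "orders-ingest"}],  # hypothetical function
    Statistic="Sum",
    Period=300,                                    # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],   # placeholder ARN
)
print("alarm created")
```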
Posted 2 months ago
8 - 12 years
11 - 15 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design and develop our next generation of RESTful APIs and event-driven services in a distributed environment.
- Be hands-on in the design and development of robust solutions to hard problems, while considering scale, security, reliability, and cost.
- Support other product delivery partners in the successful build, test, and release of solutions.
- Work with distributed requirements and technical stakeholders to complete shared design and development.
- Support the full software lifecycle of design, development, testing, and support for technical delivery.
- Work with both onsite (Scrum Master, Product, QA, and Developers) and offshore QA team members in properly defining testable scenarios based on requirements/acceptance criteria.
- Be part of a fast-moving team, working with the latest tools and open-source technologies.
- Work on a development team using agile methodologies.
- Understand the business and the application architecture end to end.
- Solve problems by crafting software solutions using maintainable and modular code.
- Participate in daily team standup meetings where you'll give and receive updates on the current backlog and challenges.
- Participate in code reviews. Ensure code quality and deliverables.
- Provide impact analysis for new requirements or changes.
- Be responsible for low-level design with the team.
Qualifications
Required Skills:
- Technology Stack: Java Spring Boot, GitHub, OpenShift, Kafka, MongoDB, AWS, Serverless, Lambda, OpenSearch
- Hands-on experience with Java 1.8 or higher, Java, Spring Boot, OpenShift, Docker, Jenkins
- Solid understanding of OOP, design patterns, and data structures
- Experience in building REST APIs/microservices
- Strong experience in frontend skills like React JS/Angular JS
- Strong understanding of parallel processing, concurrency, and asynchronous concepts
- Experience with NoSQL databases like MongoDB, PostgreSQL
- Proficient in working with the SAM (Serverless Application Model) framework, with a strong command of Lambda functions using Java.
- Proficient in internal integration within the AWS ecosystem using Lambda functions, leveraging services such as EventBridge, S3, SQS, SNS, and others.
- Must have experience in Apache Spark.
- Experienced in internal integration within AWS using DynamoDB with Lambda functions, demonstrating the ability to architect and implement robust serverless applications.
- CI/CD experience: must have GitHub experience.
- Recognized internally as the go-to person for the most complex software engineering assignments.
Required Experience & Education:
- 11-13 years of experience
- Experience with vendor management in an onshore/offshore model.
- Proven experience with architecture, design, and development of large-scale enterprise application solutions.
- College degree (Bachelor) in related technical/business areas or equivalent work experience.
- Industry certifications such as PMP, Scrum Master, or Six Sigma Green Belt.
Posted 2 months ago
12 - 16 years
40 - 45 Lacs
Hyderabad
Work from Office
Position Overview:
The job profile for this position is Software Engineering Advisor - Solution Architect, which is a Band 4 Contributor Career Track Role. Excited to grow your career? We are looking for exceptional software engineers/developers in our PBM Plus Technology organization. This role requires highly technical design skills and experience in designing and developing complex, large-scale distributed systems with microservices, micro products, and event-driven architecture. Proven experience in AWS/cloud-native, migration, on-prem, and hybrid application design, development, and deployment is required. You are expected to work closely with Subject Matter Experts, developers, and business stakeholders to ensure that application solutions meet business/customer requirements. This role will mentor and provide support to more junior engineers.
Responsibilities:
- Be hands-on in the design and development of robust solutions to hard problems, while considering scale, security, reliability, and cost.
- Proven experience in troubleshooting production and performance issues, providing solutions to complex problems.
- Experience in end-to-end solution design with on-prem and cloud services.
- Able to redesign existing business and application architecture end to end.
- Proven track record of working with Enterprise Architects and Security & Governance teams.
- Craft and manage CI/CD, CloudFormation templates, IAM, and code & build tool strategies.
- Design and develop our next generation of RESTful APIs and event-driven services in a distributed environment.
- Determine platform architecture, technology, and tools.
- Confirm architecture capability and flexibility to support high-availability web applications by developing analytical models and completing validation tests.
- Improve architecture by tracking emerging technologies and evaluating their applicability to business goals and operational requirements.
- Support other product delivery partners in the successful build, test, and release of solutions.
- Work with distributed requirements and technical stakeholders to complete shared design and development.
- Support the full software lifecycle of design, development, testing, and support for technical delivery.
- Work with both onsite (Scrum Master, Product, QA, and Developers) and offshore QA team members in properly defining testable scenarios based on requirements/acceptance criteria.
- Be part of a fast-moving team, working with the latest tools and open-source technologies.
- Work on a development team using agile methodologies.
- Solve problems by crafting software solutions using maintainable and modular code.
- Participate in daily team standup meetings where you'll give and receive updates on the current backlog and challenges.
- Participate in code reviews. Ensure code quality and deliverables.
- Provide impact analysis for new requirements or changes.
- Be responsible for low-level design with the team.
Required Skills:
- Technology Stack: Java Spring Boot, GitHub, OpenShift, Kafka, MongoDB, AWS, Serverless, Lambda, OpenSearch
- Hands-on experience with Java 1.8 or higher, Java, Spring Boot, OpenShift, Docker, Jenkins, Kubernetes
- JVM memory management and class loader strategies
- Existing experience in solution design with event-driven architecture
- Proven record of building data flow diagrams, deployment strategies, DB models, landscape designs, firewall and load balancer setups, on-prem vs. cloud connectivity, DR strategies, etc.
- Solid understanding of OOP, design patterns, and data structures
- Experience in designing and building distributed systems with REST APIs/microservices
- Strong understanding of parallel processing, concurrency, and asynchronous concepts
- Experience with NoSQL databases like MongoDB, PostgreSQL
- Proficient in working with the SAM (Serverless Application Model) framework, with a strong command of Lambda functions using Java.
- Proficient in internal integration within the AWS ecosystem using Lambda functions, leveraging services such as EventBridge, S3, SQS, SNS, and others.
- Experienced in internal integration within AWS using DynamoDB with Lambda functions, demonstrating the ability to architect and implement robust serverless applications.
- CI/CD experience: must have GitHub experience.
- Recognized internally as the go-to person for the most complex software engineering assignments.
Required Experience & Education:
- 12+ years of experience
- Experience with vendor management in an onshore/offshore model.
- Proven experience with architecture, design, and development of large-scale enterprise application solutions.
- College degree (Bachelor) in related technical/business areas or equivalent work experience.
- Industry certifications such as TOGAF, AWS Solution Architect, PMP, Oracle Certified Professional, etc.
Location & Hours of Work:
Full-time position, working 40 hours per week, with overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, in a hybrid working model (3 days WFO and 2 days WAH).
Posted 2 months ago
4 - 6 years
6 - 8 Lacs
Bengaluru
Work from Office
We are looking for a Senior Data Analytics Engineer with 4-6 years of relevant experience in analyzing meaningful data coming from machines and the field. In this role, you will take part in system debug and telemetry activities and use big data sets, primarily for Bluetooth solutions, sourced from hardware, firmware, and software data, to build strong business analytics. The role includes deep dives into Bluetooth debug and telemetry data, identifying design and flow gaps using the telemetry data received, building decisive statistical/KPI/KEI dashboards and smart reports, data-based competitive analysis, defining and utilizing innovative telemetry events or debug data, defining automation processes for faster results, and more.
Ideal Qualification and Skillset
- Bachelor's or Master's degree in Engineering (Computer Science)
- 3-5 years of experience in low-level debug and design
- 3-5 years of knowledge in communication and wireless network protocols (Bluetooth, Wi-Fi) or equivalent
- 4-6 years of experience with data analysis skills
- 3-5 years of experience in BI-based decision making
- 4+ years of experience with tools like data visualization / Splunk / Power BI / Elastic / AWS / OpenSearch and the like
- 4+ years of experience with scripting in Python / Perl / C# / C++
- Very good problem-solving skills; open to cross-geo collaboration
- Great interpersonal, communication, and learning skills
- Highly motivated, with strong self-learning ability and strong technical skill
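As a hedged example of the telemetry-driven KPI work described above, the sketch below aggregates hypothetical Bluetooth connection logs with pandas to compute per-firmware failure rates. The CSV file, column names, and KPI definitions are assumptions for illustration.

```python
import pandas as pd

# Hypothetical telemetry export: one row per connection attempt.
df = pd.read_csv(
    "bt_connection_events.csv",
    parse_dates=["timestamp"],
)

# KPIs: attempt count, connection failure rate, and median pairing time per firmware version.
kpis = (
    df.assign(failed=df["status"].ne("CONNECTED"))
      .groupby("firmware_version")
      .agg(
          attempts=("status", "size"),
          failure_rate=("failed", "mean"),
          median_pairing_ms=("pairing_time_ms", "median"),
      )
      .sort_values("failure_rate", ascending=False)
)
print(kpis)
```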
Posted 2 months ago
3 - 5 years
10 - 15 Lacs
Ahmedabad
Work from Office
We're seeking a talented Mid-Level Back-End Software Engineer to join our dynamic and innovative team. This role is an exciting opportunity to work on cutting-edge back-end projects and contribute to our growth. If you have a solid background in back-end development, particularly with cloud-based infrastructure, and are eager to collaborate on diverse applications in a team-oriented environment, we want to hear from you.
Responsibilities
- Back-End Development: Build and maintain robust back-end features for our applications using NodeJS and NestJS. Work with AWS infrastructure, including Lambdas, OpenSearch, Terraform, MongoDB Atlas, S3 buckets, and Weaviate databases.
- Integration: Integrate external services such as Stripe, Zoom, and Auth0, as well as our internal analytics lambdas, to enhance application functionalities.
- Collaboration: Engage in daily stand-up meetings and collaborate with UX designers to ensure back-end capabilities align with proposed features. Occasionally participate in technical calls to assess vendors and their products.
- Agile Environment: Actively contribute to our agile development process, participating in code reviews and providing valuable input for continuous improvement.
Qualifications
- 3-5 years of professional experience in back-end development.
- Strong proficiency in NodeJS, NestJS, and experience with AWS services (Lambdas, OpenSearch, Terraform).
- Experience with MongoDB Atlas, S3 buckets, and Weaviate databases.
- Familiarity with integrating external services (Stripe, Zoom, Auth0).
- Knowledge of cloud architecture, preferably AWS.
- Experience working in an agile development environment.
- Excellent problem-solving and communication skills.
- Ability to work effectively in a team setting.
Benefits:
- Gain real-world experience in corporate functioning.
- Learn to collaborate with diverse teams and meet deadlines in a professional environment.
- Access various learning and development programs to explore your passion.
- Work in a fast-paced, rapidly expanding tech team undergoing a revamp, with exposure to advanced technology and tools relevant to your role.
Posted 2 months ago
5 - 10 years
25 - 27 Lacs
Gurgaon
Work from Office
- 5+ years of experience in AWS, CI/CD, and DevOps tools.
- Strong understanding of cloud-based architecture and cloud operations.
- Working understanding of infrastructure and application monitoring platforms (Datadog, OpenSearch, ELK Stack, etc.).
- Good understanding of performance and capacity monitoring, including its configuration and optimization.
- 5+ years of experience in setting up strategy, processes, and checks for resiliency in AWS.
- Knowledge of Linux, shell scripting, and Python is preferred.
- Working knowledge of Terraform is good to have.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently as well as collaboratively in a team environment.
Posted 2 months ago
7 - 9 years
0 Lacs
Pune
Hybrid
- In-depth knowledge of AWS services including EC2, S3, RDS, Lambda, ACM, SSM, and IAM.
- Experience with Kubernetes (EKS) and Elastic Container Service (ECS) for orchestration and deployment of microservices. Engineers are expected to be able to execute upgrades independently.
- Cloud Architecture: Proficient knowledge of advanced AWS networking services, including CloudFront and Transit Gateway.
- Monitoring & Logging: Knowledge of AWS CloudWatch, CloudTrail, OpenSearch, and Grafana monitoring tools.
- Security Best Practices: Understanding of AWS security features and compliance standards.
- API: RestAPI/OneAPI relevant experience is mandatory.
- Infrastructure as Code (IaC): Proficient in AWS CloudFormation and Terraform for automated provisioning.
- Scripting Languages: Proficient in common languages (PowerShell, Python, and Bash) for automation tasks.
- CI/CD Pipelines: Familiar with tools like Azure DevOps Pipelines for automated testing and deployment.
- Relevant Experience: A minimum of 4-5 years of experience in a comparable Cloud Engineer role.
Nice to Have:
- Knowledge of and hands-on experience with Azure services.
- Agile Frameworks: Proficient knowledge of Agile ways of working (Scrum, SAFe).
- Certification: For AWS, at least Certified Cloud Practitioner + Certified Solutions Architect Associate + Certified Solutions Architect Professional. For Azure, at least Microsoft Certified: Azure Solutions Architect Expert.
- Mindset: Platform engineers must focus on automating activities where possible, to ensure stability, reliability, and predictability.
Posted 3 months ago
7 - 10 years
15 - 25 Lacs
Bengaluru
Hybrid
- Bachelor's degree in a technical discipline with at least seven years of professional experience in network, systems, or IT infrastructure engineering. Equivalent experience and/or certifications will be considered as well.
- Demonstrates a proactive and self-starting work style with a strong sense of urgency; capable of making independent decisions while adhering to established best practices.
- Self-motivated and adept at independently prioritizing and managing multiple client issues simultaneously.
- Strong troubleshooting skills and ability to analyze and resolve complex technical issues.
- Proficient in diagnosing root causes rather than treating symptoms, with the ability to anticipate the impact on additional processes and systems.
- Hands-on experience in understanding and managing client-server and microservice-style software stacks.
- Expertise in working with on-prem datacenters and cloud infrastructure, including AWS and Azure platforms.
- Experience implementing and administering Cisco switches and firewalls.
- Advanced grasp of layer 3-7 networking concepts and common protocols.
- Hands-on experience with VMware administration and SAN configuration.
- Experience developing automation in multiple technologies including Ansible, C#, Python, Bash, etc.
- Familiarity with cloud services such as AWS and Azure, and their associated tools.
- Experience with containerization and associated platforms.
- Experience with AWS EKS and Azure Kubernetes Service.
- Experience with DKIM and DMARC.
- Proficient in RabbitMQ administration and configuration.
- Experience with monitoring tools like Datadog for system performance analysis.
- In-depth knowledge of Linux and Windows server environments.
- Strong understanding of Linux operating systems, specifically Ubuntu.
- Understanding and experience with Windows Server clustering and administration.
- Experience in certificate management and patching procedures.
- Experience with cloud technologies such as AWS and with migrating resources from on-prem data centers to the cloud.
- Experience deploying and supporting client-server and microservice-style software stacks.
- Hands-on experience with deployment tools like Rancher, Helm charts, and ArgoCD.
- Solid understanding of microservices architecture and network configurations.
- Proficiency in configuration management tools such as Ansible and Terraform in cloud environments.
- Experience with ELK (Elasticsearch, Logstash, Kibana) and OpenSearch for log analysis and search.
- Experience in process automation through scripting and infrastructure as code.
- Passionate about automation, troubleshooting, and solving challenging problems.
- Works cross-functionally with other teams on improvements to existing infrastructure to increase system stability and performance.
Posted 3 months ago
5 - 8 years
30 - 35 Lacs
Gurgaon, Jaipur
Work from Office
An "Engineering Manager - .NET and AWS" is a senior lead role in software development, focusing on managing a team of engineers who specialize in building microservices and applications using the .NET technology stack and Amazon Web Services (AWS) cloud infrastructure. This role combines technical expertise with leadership and management responsibilities. Here are key responsibilities and skills associated with this role: Responsibilities: 1. Team Leadership: Lead and manage a team of engineers, providing guidance, coaching, and mentorship to help them meet their professional goals. 2. Project Management: Oversee project planning, execution, and delivery, ensuring that projects are completed on time and within budget. 3. Architecture and Design: Collaborate with the Solution Architect to define architecture, design patterns, and best practices for developing .NET-based microservices on AWS. 4. Microservices Development: Provide technical direction and expertise for developing microservices and APIs using .NET technologies, such as ASP.NET Core. 5. AWS Integration: Oversee the integration of AWS cloud services into the architecture, such as and Redis, Opensearch, AWS Event Hub, Amazon ECS, AWS Lambda, and other relevant AWS offerings. 6. Scalability and Performance: Ensure that applications and microservices are designed for scalability and optimized for performance by utilizing AWS auto- scaling and load balancing. 7. Security and Compliance: Implement security best practices and compliance standards within the microservices and AWS infrastructure. 8. Resource Management: Manage allocation of resources effectively, and make strategic decisions to optimize resource usage. 9. Stakeholder Communication: Communicate with business stakeholders, product managers, and cross-functional teams to align engineering efforts with business objectives. 10. Mentoring and Training: Foster a culture of continuous learning by providing training and development opportunities for team members. Skills and Qualifications: 1. .NET Stack: Proficiency in .NET technologies, particularly C#, ASP.NET Core, and ASP.NET REST API. 2. AWS Services: In-depth knowledge of AWS services and their use cases, including EC2, Lambda, API Gateway, RDS, DynamoDB, S3, Opensearch, Redis and more. 3. Microservices Architecture: Strong understanding of microservices (serverless) architecture, patterns, and best practices. 4. API Design: Expertise in designing RESTful APIs and maintaining API documentation. 5. Cloud Computing: A comprehensive understanding of cloud computing concepts and experience in AWS infrastructure management. 6. Security and Compliance: Knowledge of security best practices and compliance standards relevant to AWS environments. 7. Containerization: Familiarity with containerization technologies, Docker, and container orchestration using AWS ECS or EKS. 8. Project Management: Proficiency in project management methodologies and tools for effective project planning and execution. 9. Leadership and Communication: Strong leadership skills, excellent communication, and the ability to collaborate with cross-functional teams. 10. Agile Methodology: Ability to work in Agile development environments, leading Agile teams and adapting to changing requirements. Hands on exp .Net & AWS , Work from Office Only, Immediate Joiner or Early Joiner Only
Posted 3 months ago
5 - 8 years
30 - 35 Lacs
Gurgaon, Jaipur
Work from Office
An " Manager - .Python and AWS" is a senior lead role in software development, focusing on managing a team of engineers who specialize in building microservices and applications using the Python technology stack and Amazon Web Services (AWS) cloud infrastructure. This role combines technical expertise with leadership and management responsibilities. Here are key responsibilities and skills associated with this role: Responsibilities: 1. Team Leadership: Lead and manage a team of engineers, providing guidance, coaching, and mentorship to help them meet their professional goals. 2. Project Management: Oversee project planning, execution, and delivery, ensuring that projects are completed on time and within budget. 3. Architecture and Design: Collaborate with the Solution Architect to define architecture, design patterns, and best practices for developing Python -based microservices on AWS. 4. Microservices Development: Provide technical direction and expertise for developing microservices and APIs using Python technologies, 5. AWS Integration: Oversee the integration of AWS cloud services into the architecture, such as and Redis, Opensearch, AWS Event Hub, Amazon ECS, AWS Lambda, and other relevant AWS offerings. 6. Scalability and Performance: Ensure that applications and microservices are designed for scalability and optimized for performance by utilizing AWS auto- scaling and load balancing. 7. Security and Compliance: Implement security best practices and compliance standards within the microservices and AWS infrastructure. 8. Resource Management: Manage allocation of resources effectively, and make strategic decisions to optimize resource usage. 9. Stakeholder Communication: Communicate with business stakeholders, product managers, and cross-functional teams to align engineering efforts with business objectives. 10. Mentoring and Training: Foster a culture of continuous learning by providing training and development opportunities for team members. Skills and Qualifications: 1. Proficiency in Python. technologies, particularly C#, and REST API. 2. AWS Services: In-depth knowledge of AWS services and their use cases, including EC2, Lambda, API Gateway, RDS, DynamoDB, S3, Opensearch, Redis and more. 3. Microservices Architecture: Strong understanding of microservices (serverless) architecture, patterns, and best practices. 4. API Design: Expertise in designing RESTful APIs and maintaining API documentation. 5. Cloud Computing: A comprehensive understanding of cloud computing concepts and experience in AWS infrastructure management. 6. Security and Compliance: Knowledge of security best practices and compliance standards relevant to AWS environments. 7. Containerization: Familiarity with containerization technologies, Docker, and container orchestration using AWS ECS or EKS. 8. Project Management: Proficiency in project management methodologies and tools for effective project planning and execution. 9. Leadership and Communication: Strong leadership skills, excellent communication, and the ability to collaborate with cross-functional teams. 10. Agile Methodology: Ability to work in Agile development environments, leading Agile teams and adapting to changing requirements. Hands on exp Python & AWS, PYTHON, MySQL, FLASK , DJANGO, REST, JSON, Immediate Joiner
Posted 3 months ago
7 - 12 years
9 - 14 Lacs
Chennai
Work from Office
AWS Cloud Expertise: Solid experience with AWS cloud infrastructure, specifically using CDK or CloudFormation templates.
Infrastructure Management: Responsible for planning, implementing, and scaling AWS cloud infrastructure.
CI/CD Pipelines: Implement and maintain continuous integration/continuous delivery (CI/CD) pipelines for automated infrastructure provisioning.
Collaboration: Work closely with architecture and engineering teams to design and implement scalable software services.
Data & AI Platforms: Experience designing and building data and AI platform environments on AWS, utilizing services like EMR, EKS, EC2, ELB, RDS, Lambda, API Gateway, Kinesis, S3, DynamoDB, ECS, OpenSearch, and SageMaker.
Cloud-Native Applications: Experience building and maintaining cloud-native applications.
Automation Skills: Strong automation skills, particularly with Python.
DevOps Tools: Proficiency with DevOps tools such as Docker, GitHub, GitHub Actions, Kubernetes, and SonarQube.
Monitoring: Experience with monitoring solutions like CloudWatch, the ELK stack, and Prometheus.
Infrastructure as Code (IaC): Understanding and experience writing Infrastructure-as-Code (IaC) using tools like CloudFormation or Terraform (see the CDK sketch below).
Scripting: Proficient in script development and various scripting languages.
Troubleshooting: Experience troubleshooting distributed systems.
Communication: Excellent communication and collaboration skills.
Key Skills: AWS Services, GitHub, GitHub Actions, Docker, Groovy Script, Python, and TypeScript
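For the Infrastructure-as-Code requirement, a minimal sketch using the AWS CDK v2 in Python; the stack name and the single encrypted, versioned S3 bucket are illustrative assumptions. Running `cdk deploy` would synthesize this into a CloudFormation template and provision it.

```python
from aws_cdk import App, Stack, RemovalPolicy, aws_s3 as s3
from constructs import Construct


class DataPlatformStack(Stack):
    """Illustrative stack: one encrypted, versioned bucket for raw platform data."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Bucket defaults chosen for safety: versioning, SSE-S3 encryption,
        # all public access blocked, and retention on stack deletion.
        s3.Bucket(
            self,
            "RawDataBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            removal_policy=RemovalPolicy.RETAIN,
        )


app = App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```

The same construct pattern extends to the other services named in the posting (EMR, EKS, Kinesis, SageMaker, and so on), with CI/CD pipelines such as GitHub Actions running the synth and deploy steps.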
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Bengaluru
Work from Office
Skill: Splunk Junior Analyst
Years of experience: 5+ years
Work Location: PAN India (NTT DATA locations)
Mandatory Skills: Splunk, Cribl, OpenSearch, log collection
Key Responsibilities:
Excellent incident and request handling
Help attain and then maintain SLAs and KPIs
Run preconfigured reports to support business and cyber requests
Ensure system/application alerts are acknowledged and actioned
Escalate issues to STO management and leadership as needed
Support patching, upgrades, and configuration changes
Assist in managing high-priority incidents and requests related to log collection (see the sketch below)
Ensure compliance with audit controls and evidence requirements
Experience:
Basic understanding of networking concepts and protocols, including TCP/IP, DNS, and firewalls
Basic understanding of Unix/Linux operating system management
Familiarity with reporting tools such as Power BI, PowerPoint, and Excel
1-3 years of experience in a security operations role
Confidence in troubleshooting
Good communication skills
Other Inputs (e.g., rotational shifts):
Working hours: standard hours for now; based on client requirements, CET timings and on-call support may be needed.
Initial 3 to 4 weeks work from office, then hybrid working.
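For the log-collection responsibilities above, a minimal sketch of pulling recent error events from an OpenSearch index over its REST `_search` API; the endpoint, index pattern, field names, and credentials are all illustrative assumptions about the environment, not details from the posting.

```python
import requests

# Illustrative values: replace with the real cluster endpoint, index, and credentials.
OPENSEARCH_URL = "https://opensearch.example.internal:9200"
INDEX = "app-logs-*"

query = {
    "size": 20,
    "sort": [{"@timestamp": {"order": "desc"}}],
    "query": {
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
}

resp = requests.post(
    f"{OPENSEARCH_URL}/{INDEX}/_search",
    json=query,
    auth=("readonly_user", "changeme"),  # illustrative read-only credentials
    timeout=10,
)
resp.raise_for_status()

# Print the timestamp and message of each matching log event.
for hit in resp.json()["hits"]["hits"]:
    source = hit["_source"]
    print(source.get("@timestamp"), source.get("message"))
```

An equivalent check in Splunk would be a saved SPL search over the same time window, which is the kind of preconfigured report this role is expected to run and triage.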
Posted 3 months ago
5 - 7 years
8 - 10 Lacs
Hyderabad
Work from Office
About The Role
As a Senior Backend Engineer you will develop reliable, secure, and performant APIs that apply Kensho's AI capabilities to specific customer workflows. You will collaborate with colleagues from Product, Machine Learning, Infrastructure, and Design, as well as with other engineers within Applications. You have a demonstrated capacity for depth and are comfortable working with a broad range of technologies. Your verbal and written communication is proactive, efficient, and inclusive of your geographically distributed colleagues. You are a thoughtful, deliberate technologist and share your knowledge generously.
Equivalent to Grade 11 Role (Internal)
You will:
Design, develop, test, document, deploy, maintain, and improve software
Manage individual project priorities, deadlines, and deliverables
Work with key stakeholders to develop system architectures, API specifications, implementation requirements, and complexity estimates
Test assumptions through instrumentation and prototyping
Promote ongoing technical development through code reviews, knowledge sharing, and mentorship
Optimize Application Scaling: Efficiently scale ML applications to maximize compute resource utilization and meet high customer demand
Address Technical Debt: Proactively identify and propose solutions to reduce technical debt within the tech stack
Enhance User Experiences: Collaborate with Product and Design teams to develop ML-based solutions that enhance user experiences and align with business goals
Ensure API security and data privacy by implementing best practices and compliance measures
Monitor and analyze API performance and reliability, making data-driven decisions to improve system health
Contribute to architectural discussions and decisions, ensuring scalability, maintainability, and performance of the backend systems
Qualifications
At least 5+ years of direct experience developing customer-facing APIs within a team
Thoughtful and efficient communication skills (both verbal and written)
Experience developing RESTful APIs using a variety of tools (see the sketch below)
Experience turning abstract business requirements into concrete technical plans
Experience working across many stages of the software development lifecycle
Sound reasoning about the behavior and performance of loosely coupled systems
Proficiency with algorithms (including time and space complexity analysis), data structures, and software architecture
At least one domain of demonstrable technical depth
Familiarity with CI/CD practices and tools to streamline deployment processes
Experience with containerization technologies (e.g., Docker, Kubernetes) for application deployment and orchestration
Technologies We Love
Python, Django, FastAPI
mypy, OpenAPI
RabbitMQ, Celery, Kafka
OpenSearch, PostgreSQL, Redis
Git, Jsonnet, Jenkins, Docker, Kubernetes
Airflow, AWS, Terraform
Grafana, Prometheus
ML Libraries: PyTorch, Scikit-learn, Pandas
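For the RESTful API qualification above, a minimal sketch of a typed endpoint in the posting's preferred stack (FastAPI with Pydantic models, which also yields an OpenAPI schema automatically); the search route, request/response models, and the stand-in index are illustrative assumptions, not Kensho's actual API.

```python
from typing import List

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Document Search API")  # illustrative service name


class SearchRequest(BaseModel):
    query: str
    limit: int = 10


class SearchHit(BaseModel):
    doc_id: str
    score: float


# Stand-in for a real retrieval backend such as OpenSearch.
FAKE_INDEX = [("doc-1", 0.92), ("doc-2", 0.71)]


@app.post("/v1/search", response_model=List[SearchHit])
def search(req: SearchRequest) -> List[SearchHit]:
    # Validate the request, then return the top-scoring hits up to the limit.
    if not req.query.strip():
        raise HTTPException(status_code=422, detail="query must not be empty")
    return [SearchHit(doc_id=d, score=s) for d, s in FAKE_INDEX[: req.limit]]
```

Because the request and response are typed, FastAPI serves interactive docs at /docs, which supports the "maintaining API documentation" expectation that recurs across these postings.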
Posted 1 month ago