5.0 - 9.0 years
0 Lacs
Haryana
On-site
You will be joining Wipro Limited, a leading technology services and consulting company focused on developing innovative solutions for clients' complex digital transformation needs. With a broad portfolio of capabilities in consulting, design, engineering, and operations, you will play a crucial role in helping clients achieve their ambitious goals and build sustainable businesses. Wipro has a global presence with over 230,000 employees and business partners across 65 countries, committed to supporting customers, colleagues, and communities in navigating an ever-changing business landscape. For more information, please visit www.wipro.com.

As a Technical Lead specializing in AWS and DevOps, you are expected to bring the following skills:
- Proficiency in Terraform, AWS, and DevOps
- AWS Certified Solutions Architect - Associate
- AWS Certified DevOps Engineer - Professional

Your responsibilities as an AWS/DevOps Analyst will include:
- More than 6 years of IT experience
- Setting up and maintaining ECS solutions
- Designing and building AWS solutions using services such as VPC, EC2, WAF, ECS, ALB, IAM, KMS, ACM, Secrets Manager, S3, CloudFront, etc.
- Working with SNS, SQS, and EventBridge
- Setting up and maintaining databases such as RDS, Aurora, Postgres, DynamoDB, and Redis
- Configuring AWS Glue jobs and AWS Lambda
- Establishing CI/CD pipelines using Azure DevOps
- Utilizing GitHub for source code management
- Building and maintaining cloud-native applications
- Experience with container technologies like Docker
- Configuring logging and monitoring solutions like CloudWatch and OpenSearch
- Managing system configurations using Terraform and Terragrunt
- Ensuring system security through best-in-class security solutions
- Identifying and recommending process and architecture improvements
- Troubleshooting distributed systems effectively

In addition to technical skills, the following interpersonal skills are essential:
- Strong communication and collaboration abilities
- Team player mentality
- Analytical and problem-solving skills
- Familiarity with Agile methodologies
- Ability to train others on procedural and technical topics

Mandatory Skills: Cloud App Dev Consulting
Experience Range: 5-8 Years

At Wipro, we are reinventing ourselves to meet the demands of the digital age. We are looking for individuals who are inspired by reinvention and committed to continuous personal and professional growth. Join us in building a modern Wipro that is a pioneer in digital transformation. We encourage individuals with disabilities to apply and be a part of our purpose-driven organization.
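To give a flavor of the CloudWatch monitoring work described above, here is a minimal hedged boto3 sketch; the cluster, service, and SNS topic names are hypothetical, not any actual Wipro setup:

import boto3

# Create a CPU alarm for an ECS service (all resource names are illustrative).
cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")
cloudwatch.put_metric_alarm(
    AlarmName="orders-service-high-cpu",
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "prod-cluster"},
        {"Name": "ServiceName", "Value": "orders-service"},
    ],
    Statistic="Average",
    Period=300,               # 5-minute evaluation windows
    EvaluationPeriods=2,      # two consecutive breaches before alarming
    Threshold=80.0,           # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)

In a Terraform-managed estate like the one described, the same alarm would more likely live in an aws_cloudwatch_metric_alarm resource; the boto3 form is shown only because it is compact.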
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Sr. Software Reliability Engineer at Autodesk, you will have the exciting opportunity to join our platform team in Pune, India, focusing on our Cloud services. You will be a key contributor to Autodesk Platform Services (APS), a cloud service platform that facilitates custom and pre-built applications, integrations, and innovative solutions. APS offers APIs and web services to leverage our customers' Design and Make data, connecting custom workflows and end-to-end solutions. This role allows you to work directly on the APIs and services that impact millions of Autodesk product users. Reporting to the Sr. Manager of Engineering, you will play a significant role in ensuring the smooth operation of Autodesk Platform APIs, which serve as the foundation for next-generation design apps.

In this hybrid position, you will be part of an Agile product team dedicated to developing top-tier cloud software applications and services. Collaboration will be a key aspect of your role as you work with local and remote colleagues from diverse backgrounds such as business, engineering, operations, and support. Within this dynamic environment, you will work alongside highly motivated and skilled software engineers, engaging in continuous learning, teaching, and problem-solving to deliver innovative solutions to complex engineering challenges. Your responsibilities will include making critical decisions, addressing challenging issues, and enhancing the platform's reliability, resiliency, and scalability.

Your role will involve configuring and enhancing cloud infrastructure to ensure service availability, resiliency, performance, and cost efficiency as load increases over time. You will also be responsible for maintaining system updates for security compliance, driving service level objectives (SLOs), participating in technical discussions and decision-making, building tools for operational efficiency, troubleshooting technical issues, and contributing to on-call rotations for timely service recovery.

To qualify for this position, you should hold a Bachelor's degree in a related field such as Computer Science, possess at least 6 years of software engineering experience with a minimum of 3 years as a Site Reliability Engineer, and demonstrate familiarity with Elasticsearch/OpenSearch, AWS deployment, Continuous Delivery methodologies, resiliency patterns, and cloud security. Proficiency with observability tools like Grafana, OpenTelemetry, or Prometheus, and experience with security compliance standards such as SOC 2, are also required.

The ideal candidate for this role is a team player with a strong focus on delivering comprehensive solutions. You should have a passion for continual learning, be adept at presenting software demos, and be capable of addressing questions regarding project progress effectively.

Join us at Autodesk, where we empower innovators to turn their ideas into reality, shaping a better world for all. Our inclusive culture guides our interactions with each other, our customers, and partners, defining our positive impact on the world. If you are ready to embrace new challenges and contribute to meaningful work, we invite you to be an integral part of our team. Autodesk offers a competitive compensation package, including base salaries, annual cash bonuses, commissions for sales roles, stock grants, and comprehensive benefits. We are committed to fostering a culture of diversity and belonging, ensuring that everyone has the opportunity to thrive and succeed.
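To make the SLO responsibility above concrete, here is a small self-contained arithmetic sketch (illustrative numbers, not Autodesk's actual targets) of how an error budget for a 99.9% availability objective is tracked over a 30-day window:

SLO_TARGET = 0.999
WINDOW_MINUTES = 30 * 24 * 60                       # 43,200 minutes in 30 days

error_budget = (1 - SLO_TARGET) * WINDOW_MINUTES    # 43.2 minutes of allowed downtime
observed_downtime = 12.5                            # minutes of bad time so far (example)

remaining = error_budget - observed_downtime
burn_rate = observed_downtime / error_budget
print(f"budget={error_budget:.1f} min, remaining={remaining:.1f} min, burned={burn_rate:.0%}")

Driving an SLO in practice means alerting when the burn rate threatens to exhaust that 43.2-minute budget before the window ends, which is exactly the kind of tooling this role builds.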
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Senior Principal Consultant - Generative AI - Application Development (Senior Developer).

We are looking for a Senior Application Developer to join our product engineering team. This role requires hands-on experience in designing and developing scalable application components with a strong focus on API development, middleware orchestration, and data transformation workflows. You will be responsible for building foundational components that integrate data pipelines, orchestration layers, and user interfaces, enabling next-gen digital and AI-powered experiences.

Key Responsibilities:
Design, develop, and manage robust APIs and middleware services using Python frameworks like FastAPI and Uvicorn, ensuring scalable and secure access to platform capabilities.
Develop end-to-end data transformation workflows and pipelines using LangChain, spacy, tiktoken, presidio-analyzer, and llm-guard, enabling intelligent content and data processing.
Implement integration layers and orchestration logic for seamless communication between data sources, services, and UI using technologies like OpenSearch, boto3, requests-aws4auth, and urllib3.
Work closely with UI/UX teams to integrate APIs into modern front-end frameworks such as ReactJS, Redux Toolkit, and Material UI.
Build configurable modules for ingestion, processing, and output using Python libraries like PyMuPDF, openpyxl, and Unidecode for handling structured and unstructured data.
Implement best practices for API security, data privacy, and anonymization using tools like presidio-anonymizer and llm-guard.
Drive continuous improvement in performance, scalability, and reliability of the platform architecture.
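As an illustration of the API-plus-anonymization pattern in the responsibilities above, here is a minimal hedged sketch using FastAPI with Microsoft Presidio; the endpoint name and payload model are invented for the example:

from fastapi import FastAPI
from pydantic import BaseModel
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

app = FastAPI()
analyzer = AnalyzerEngine()        # detects PII spans (names, emails, phone numbers)
anonymizer = AnonymizerEngine()    # replaces detected spans with placeholders

class Payload(BaseModel):
    text: str

@app.post("/anonymize")
def anonymize(payload: Payload) -> dict:
    findings = analyzer.analyze(text=payload.text, language="en")
    result = anonymizer.anonymize(text=payload.text, analyzer_results=findings)
    return {"anonymized": result.text}

Served with Uvicorn (uvicorn main:app), this is the shape of a middleware service that exposes data-privacy capabilities behind a scalable API, as the role describes.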
Qualifications we seek in you:

Minimum Qualifications:
Experience in software development in enterprise/web applications.
Languages & Frameworks: Python, JavaScript/TypeScript, FastAPI, ReactJS, Redux Toolkit.
Libraries & Tools: langchain, presidio-analyzer, PyMuPDF, spacy, rake-nltk, inflection, openpyxl, tiktoken.
APIs & Integration: FastAPI, requests, urllib3, boto3, opensearch-py, requests-aws4auth.
UI/UX: ReactJS, Material UI, LESS.
Cloud & DevOps: AWS SDKs, API gateways, logging, and monitoring frameworks (optional experience with serverless is a plus).

Preferred Qualifications:
Strong understanding of API lifecycle management, REST principles, and microservices.
Experience in data transformation, document processing, and middleware architecture.
Exposure to AI/ML or Generative AI workflows using LangChain or OpenAI APIs.
Prior experience working on secure and compliant systems involving user data.
Experience in CI/CD pipelines, containerization (Docker), and cloud-native deployments (AWS preferred).

Why join Genpact?
Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
Make an impact - drive change for global enterprises and solve business challenges that matter.
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Pune
Hybrid
Please find the JD below: AWS, EKS, OpenSearch, DynamoDB, Terraform, Dockerfile, CI/CD, Datadog, Helm
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
You should have 4 to 6 years of experience in enterprise web application development using technologies such as Python, Flask, REST, React.js, Docker, and AWS. You must be capable of developing and managing microfrontends with Webpack Module Federation. Experience with Agile development processes, SCM tools like GitHub or Bitbucket, infrastructure-as-code development, and CI/CD development is necessary. Additionally, familiarity with configuring and utilizing search services like OpenSearch or Elasticsearch is required. The ideal candidate should be open to learning new technologies and skills as needed.
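Since the posting calls out configuring search services, here is a minimal opensearch-py sketch (host, index name, and mapping are hypothetical) of creating an index with an explicit mapping and running a query:

from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}], use_ssl=False)

client.indices.create(
    index="products",
    body={
        "settings": {"index": {"number_of_shards": 2, "number_of_replicas": 1}},
        "mappings": {
            "properties": {
                "name": {"type": "text"},        # analyzed for full-text search
                "sku": {"type": "keyword"},      # exact-match filtering
                "updated_at": {"type": "date"},
            }
        },
    },
)

hits = client.search(index="products", body={"query": {"match": {"name": "widget"}}})

The same calls work against Elasticsearch with its own client, since the two APIs remain closely compatible for basic index and search operations.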
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
Atomicwork is dedicated to revolutionizing the digital workplace experience by integrating people, processes, and platforms through AI automation. Our team is focused on developing a cutting-edge service management platform that empowers growing businesses to streamline operations and achieve business success. We are currently looking for a talented and driven Data Pipeline Engineer to join our team.

As a Data Pipeline Engineer, your main responsibility will be to design, construct, and maintain scalable data pipelines that support our enterprise search capabilities. Your efforts will ensure that data from diverse sources is efficiently ingested, processed, and indexed, facilitating seamless and secure search experiences across the organization.

We prioritize practical skills and a proactive approach over formal qualifications. While proficiency in programming languages like Python, Java, or Scala is essential, experience with data pipeline frameworks such as Apache Airflow and tools like Apache NiFi is highly valued. Familiarity with search platforms like Elasticsearch or OpenSearch, as well as knowledge of data ingestion, transformation, and indexing processes, is also crucial for this role. Additionally, a strong understanding of enterprise search concepts, data security best practices, and cloud platforms like AWS, GCP, or Azure is required. Experience with Model Context Protocol (MCP) would be advantageous.

Your responsibilities will include designing, developing, and maintaining data pipelines for enterprise search applications; implementing data ingestion processes from various sources; developing data transformation and enrichment processes; integrating with search platforms; ensuring data quality and integrity; monitoring pipeline performance; collaborating with cross-functional teams; implementing security measures; documenting pipeline architecture, processes, and best practices; and staying updated with industry trends in data engineering and enterprise search.

At Atomicwork, you have the opportunity to contribute to the company's growth and development, from conception to execution. Our cultural values emphasize agency, taste, ownership, mastery, impatience, and customer obsession, fostering a positive and innovative workplace environment. We offer competitive compensation and benefits, including a fantastic team, convenient offices across five cities, paid time off, comprehensive health insurance, flexible allowances, and annual outings. If you are excited about the opportunity to work with us, click on the apply button to begin your application. Answer a few questions about yourself and your work, and await further communication from us regarding the next steps. If you have any additional queries or information to share, please feel free to reach out to us at careers@atomicwork.com.
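Because the role centers on Airflow-style pipelines feeding a search index, here is a hedged sketch of that ingest-transform-index shape as an Airflow DAG; the DAG id, schedule, and task callables are illustrative stand-ins:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(**_):
    pass      # pull documents from databases, file systems, or APIs

def transform(**_):
    pass      # clean, enrich, and normalize records for indexing

def index(**_):
    pass      # push prepared documents into Elasticsearch/OpenSearch

with DAG(
    dag_id="enterprise_search_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",     # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest_sources", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform_enrich", python_callable=transform)
    t_index = PythonOperator(task_id="index_documents", python_callable=index)
    t_ingest >> t_transform >> t_index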
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an AWS Data Engineer with a focus on Databricks, you will play a crucial role in designing, developing, and optimizing scalable data pipelines. Your expertise in Databricks, PySpark, and AWS development will be key in leading technical efforts and driving innovation across the stack.

Your responsibilities will include developing and optimizing data pipelines using Databricks (PySpark), implementing AWS AppSync and Lambda-based APIs for integration with Neptune and OpenSearch, collaborating with React developers and backend teams to enhance architecture, ensuring secure development practices (especially around IAM roles and AWS security), driving performance, scalability, and reliability improvements, and taking full ownership of assigned tasks and deliverables.

To excel in this role, you should have strong experience in Databricks and PySpark for building data pipelines, proficiency in AWS Neptune and OpenSearch, hands-on experience with AWS AppSync and Lambda functions, a solid grasp of IAM, CloudFront, and API development in AWS, familiarity with React.js front-end applications (a plus), strong problem-solving, debugging, and communication skills, and the ability to work independently and drive innovation.

Preferred qualifications include AWS certifications (Solutions Architect, Developer Associate, or Data Analytics Specialty) and production experience with graph databases and search platforms. This position offers a great opportunity to work with cutting-edge technologies, collaborate with talented teams, and make a significant impact on data engineering projects.
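For flavor, here is a minimal hedged PySpark sketch of the kind of Databricks pipeline step this role describes; the paths, columns, and Delta format are assumptions for the example:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily").getOrCreate()

orders = (
    spark.read.format("delta").load("/mnt/raw/orders")   # hypothetical Delta source
    .filter(F.col("status").isNotNull())                 # drop malformed rows
    .withColumn("order_date", F.to_date("created_at"))
)

daily = orders.groupBy("order_date").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

daily.write.format("delta").mode("overwrite").save("/mnt/curated/orders_daily")

A downstream Lambda behind AppSync would then serve aggregates like these, or push documents into OpenSearch for search-driven access.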
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
You will be responsible for implementing and managing CI/CD pipelines, container orchestration, and cloud services to enhance our software development lifecycle, and for collaborating with development and operations teams to streamline processes and improve deployment efficiency.

Responsibilities: Implement and manage CI/CD tools such as GitLab CI, Jenkins, or CircleCI. Utilize Docker and Kubernetes (k8s) for containerization and orchestration of applications. Write and maintain scripts in at least one scripting language (e.g., Python, Bash) to automate tasks. Manage and deploy applications using cloud services (e.g., AWS, Azure, GCP) and their respective management tools. Understand and apply network protocols, IP networking, load balancing, and firewalling concepts. Implement infrastructure-as-code (IaC) practices to automate infrastructure provisioning and management. Utilize logging and monitoring tools (e.g., ELK stack, OpenSearch, Prometheus, Grafana) to ensure system reliability and performance. Familiarize yourself with GitOps practices using tools like Flux or ArgoCD for continuous delivery. Work with Helm and Flyte for managing Kubernetes applications and workflows.

Requirements: Bachelor's or master's degree in computer science or a related field. Proven experience in a DevOps engineering role. Strong background in software development and system administration. Experience with CI/CD tools and practices. Proficiency in Docker and Kubernetes. Familiarity with cloud services and their management tools. Understanding of networking concepts and protocols. Experience with infrastructure-as-code (IaC) practices. Familiarity with logging and monitoring tools. Knowledge of GitOps practices and tools. Experience with Helm and Flyte is a plus.

Preferred Qualifications: Experience with cloud-native architectures and microservices. Knowledge of security best practices in DevOps and cloud environments. Understanding of database management and optimization (e.g., SQL, NoSQL). Familiarity with Agile methodologies and practices. Experience with performance tuning and optimization of applications. Knowledge of backup and disaster recovery strategies. Familiarity with emerging DevOps tools and technologies.
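On the monitoring side mentioned above, application metrics usually reach Prometheus through an in-process exporter. A minimal hedged sketch with the official prometheus_client library (metric and endpoint names invented):

import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests", ["endpoint"])
LATENCY = Histogram("app_request_seconds", "Request latency", ["endpoint"])

def handle(endpoint: str) -> None:
    with LATENCY.labels(endpoint).time():    # records duration into the histogram
        time.sleep(random.random() / 10)     # stand-in for real work
    REQUESTS.labels(endpoint).inc()

if __name__ == "__main__":
    start_http_server(8000)                  # serves /metrics for Prometheus to scrape
    while True:
        handle("/health")

Prometheus scrapes the /metrics endpoint on port 8000, and Grafana dashboards then query Prometheus - the reliability loop this role maintains.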
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
Atomicwork is dedicated to revolutionizing the digital workplace experience by merging people, processes, and platforms through AI automation. The team is currently focused on developing a cutting-edge service management platform that empowers businesses to streamline operations and achieve success. We are in search of a talented and driven Data Pipeline Engineer to become a part of our dynamic team.

As a Data Pipeline Engineer, you will play a pivotal role in designing, constructing, and managing scalable data pipelines that support our enterprise search capabilities. Your main responsibility will involve ensuring that data from diverse sources is effectively ingested, processed, and indexed to facilitate seamless and secure search experiences throughout the organization.

Qualifications:
- Proficiency in programming languages like Python, Java, or Scala.
- Strong expertise in data pipeline frameworks and tools such as Apache Airflow and Apache NiFi.
- Experience with search platforms like Elasticsearch or OpenSearch.
- Familiarity with data ingestion, transformation, and indexing processes.
- Understanding of enterprise search concepts including crawling, indexing, and query processing.
- Knowledge of data security and access control best practices.
- Experience with cloud platforms like AWS, GCP, or Azure and related services.
- Knowledge of Model Context Protocol (MCP) is advantageous.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.

Responsibilities:
- Design, develop, and maintain data pipelines for enterprise search applications.
- Implement data ingestion processes from various sources like databases, file systems, and APIs.
- Develop data transformation and enrichment processes to prepare data for indexing.
- Integrate with search platforms to efficiently index and update data.
- Ensure data quality, consistency, and integrity throughout the pipeline.
- Monitor pipeline performance and troubleshoot issues as they arise.
- Collaborate with cross-functional teams including data scientists, engineers, and product managers.
- Implement security measures to safeguard sensitive data during processing and storage.
- Document pipeline architecture, processes, and best practices.
- Stay abreast of industry trends and advancements in data engineering and enterprise search.

At Atomicwork, you have the opportunity to contribute to the company's growth from conceptualization to production. Our cultural values emphasize self-direction, attention to detail, ownership, continuous improvement, impatience for progress, and customer obsession. We offer competitive compensation and benefits including a fantastic team environment, well-located offices in five cities, unlimited sick leaves, comprehensive health insurance with 75% premium coverage, flexible allowances, and annual outings for team bonding.

To apply for this role, click on the apply button, answer a few questions about yourself and your work, and await further communication from us regarding the next steps. For any additional inquiries, feel free to reach out to careers@atomicwork.com.
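The indexing responsibility above often comes down to efficient bulk loading. A hedged sketch (index name and records are invented) using the bulk helper from opensearch-py:

from opensearchpy import OpenSearch, helpers

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}], use_ssl=False)

records = [
    {"id": "doc-1", "title": "Onboarding guide", "source": "confluence"},
    {"id": "doc-2", "title": "VPN setup", "source": "service-desk"},
]

actions = (
    {"_index": "enterprise-search", "_id": r["id"], "_source": r} for r in records
)
ok, errors = helpers.bulk(client, actions)   # returns (success count, error details)
print(f"indexed={ok} errors={errors}")

Using stable _id values keeps re-runs idempotent: re-ingesting a source updates documents in place instead of duplicating them.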
Posted 2 weeks ago
5.0 - 8.0 years
14 - 22 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Hiring for a top IT company. Designation: Python Developer. Skills: Python + PySpark. Location: Bangalore/Mumbai. Experience: 5-8 yrs. Best CTC. Contact: 9783460933, 9549198246, 9982845569, 7665831761, 6377522517, 7240017049. Team Converse
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
Maharashtra
On-site
You will be an essential part of a dynamic team that develops web services and APIs using Agile methodology. Your responsibilities will include writing reusable, testable, and efficient code, as well as ensuring code quality, organization, and automation. Using your skills and creativity, you will proactively identify and address defects before issues arise. Collaboration with team members to define, design, and implement new features will be a key aspect of your role. It will be your responsibility to maintain high performance, quality, and responsiveness in web services. Additionally, you will continuously explore, assess, and integrate new technologies to enhance development efficiency.

To be successful in this role, you must have at least 2 years of prior experience as a Node.js developer. Proficiency in Node.js using Express, Koa, or Restify, along with solid knowledge of JavaScript, is required. You should also have practical experience working with MongoDB and Redis, and be adept at creating secure RESTful web services and APIs. A strong understanding of data structures and algorithms, as well as experience with system design and architecture, will be beneficial. Familiarity with integrating logging and monitoring systems, experience using Git for source version control, and excellent problem-solving and debugging skills are essential. Furthermore, a good grasp of microservice architecture is crucial for this role.

Desirable qualifications include experience in setting up CI/CD pipelines using Jenkins, working with cloud technologies such as AWS, and familiarity with ElasticSearch/Solr or AWS OpenSearch. Experience in consumer web development for high-traffic, public-facing web applications, and the ability to provide technical insight for new initiatives across different disciplines, will be advantageous. Additionally, experience with relational databases and SQL knowledge are preferred, along with a total of 3 to 5 years of relevant experience.
Posted 3 weeks ago
6.0 - 10.0 years
6 - 10 Lacs
Tiruchirapalli
Work from Office
Role Overview: We are seeking a Technical Product Manager to lead and manage the entire software product development lifecycle from concept to delivery. This role is hands-on and requires a strong engineering background in backend development and modern data technologies, with demonstrated experience in building and delivering complex software products. You will work closely with internal stakeholders, developers, QA, and DevOps teams to ensure each product is planned, developed, tested, and released with precision.

Key Responsibilities:

Project & Product Lifecycle Management: Lead and manage the full product development lifecycle: planning, requirement gathering, validation, estimation, development, testing, and release. Collaborate with stakeholders to define product scope, technical feasibility, and delivery timelines. Conduct technical validation of requirements, helping guide architecture and technology decisions. Own project budgeting, resource allocation, and delivery tracking. Establish and manage sprint plans and task assignments, and ensure timely execution across development teams.

Engineering Oversight & Technical Leadership: Provide technical leadership to the software development team in Node.js, Express.js, React.js, MongoDB, and Redis; time-series databases (OpenSearch, ClickHouse, or Cassandra - experience with any one is required); and RESTful API development and WebSocket-based communication. A basic understanding of AI/ML concepts and how they integrate into modern applications is expected. Assist in code reviews, technical issue resolution, and performance optimization. Ensure architectural alignment with business and scalability goals.

Process Governance & Delivery Assurance: Manage task tracking, sprint velocity, QA cycles, and release planning. Implement robust bug tracking, test coverage reviews, and UAT readiness. Oversee the successful delivery of software builds, ensuring they meet quality and timeline expectations. Prepare and maintain project documentation and release notes.

Stakeholder Communication & Reporting: Serve as the single point of contact between engineering and leadership for project progress, blockers, and releases. Provide weekly progress reports, metrics, and risk escalations. Facilitate cross-functional communication with QA, DevOps, design, and support teams.

Required Qualifications (Must-Have Skills): 6-10 years of experience in software product development, including 3+ years in a product/project management or technical lead role. Strong hands-on experience in Node.js, Express.js, React.js, and MongoDB. Experience with at least one time-series database (OpenSearch, ClickHouse, or Cassandra). Solid understanding of RESTful APIs, WebSocket protocols, and microservice development. Familiarity with core AI/ML concepts and integration patterns in modern applications. Proven success in delivering at least two software products end-to-end to enterprise or mid-market clients. Strong understanding of Agile/Scrum, sprint planning, backlog grooming, and release cycles.

Preferred Skills: Experience in building SaaS-based platforms, monitoring tools, or infrastructure management products. Familiarity with cloud hosting environments (AWS, GCP, Azure) and DevOps practices (CI/CD pipelines, Docker/K8s). Exposure to observability stacks, log monitoring, or AI/MLOps products. Working knowledge of QA automation and performance testing tools.

Key Attributes: Strong ownership and execution mindset. Ability to balance technical depth with product vision.
Excellent communication, task management, and stakeholder coordination skills. Comfortable working in fast-paced, evolving product environments.
Posted 3 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
Kochi
Work from Office
Hiring a developer with hands-on experience in frontend (React.js / Node.js or any web application development) and backend technologies (Elasticsearch / OpenSearch). Work on modern, scalable web-based solutions.
Posted 1 month ago
4.0 - 9.0 years
6 - 16 Lacs
Bengaluru
Work from Office
Job Description: Backend Developer
Location: South Bengaluru, Karnataka, India
Experience: 4 to 9 years
Salary Range: 6 to 16 LPA
Working Days: Monday to Friday (5-day work week)
Work Mode: Work from Office only (No hybrid or remote option)

About the Opportunity: Join a product-driven tech team building a scalable, intelligent platform that simplifies high-value ownership journeys. This role is ideal for developers who want to work on cloud-native architectures, solve meaningful backend challenges, and drive real-world user impact. You'll be developing and optimizing backend systems using modern stacks and AWS services within a fast-paced, agile environment.

Key Responsibilities
- Design, develop, and deploy cloud-native applications on AWS using Lambda, AppSync, EventBridge, DynamoDB, and OpenSearch.
- Build robust RESTful APIs and ensure smooth integration with internal and external systems.
- Migrate and modernize existing full-stack applications to a cloud-native setup.
- Develop full-stack solutions using TypeScript, JavaScript, Node.js, and Python.
- Monitor and optimize database health using MongoDB and PostgreSQL.
- Maintain CI/CD pipelines using AWS CodePipeline and related tools.
- Ensure scalability, security, and performance of backend systems in a dynamic, fast-moving environment.

Technical Skills Required
- Strong experience with Python, FastAPI, Node.js, and TypeScript.
- Proficiency with MongoDB, PostgreSQL, and familiarity with DynamoDB.
- Solid knowledge of AWS services including Lambda and serverless architecture.
- Good grasp of RESTful API design and cloud-native principles.
- Exposure to front-end tech like Angular or JavaScript is a plus.
- Experience working with CI/CD tools and Agile methodologies (Scrum).

What You Bring
- 2-6 years of experience in backend development, preferably in agile teams.
- A strong problem-solving mindset and ability to take ownership.
- Experience in modern, cloud-native tech stacks.
- Collaborative attitude and clear communication skills.

Why Join Us: This is your chance to work on innovative, tech-first solutions, not a conventional sector job. You'll contribute to building intelligent, scalable platforms with a direct user impact.
- Work on deep-tech problems in asset discovery and digitization.
- Build backend engines using the latest cloud technologies.
- Be part of a transparent, ethical, innovation-focused product company.

Parent Ecosystem & Vision: You'll be contributing to a fast-growing technology initiative backed by a mission-driven ecosystem focused on sustainability, rural development, and innovation. The broader group emphasizes responsible growth, digital transformation, and creating real-world impact through technology.
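As a small illustration of the serverless pattern in the responsibilities above, here is a hedged Python Lambda handler (table name and fields are invented) that persists an item to DynamoDB behind an API:

import json
import boto3

table = boto3.resource("dynamodb").Table("assets")   # hypothetical table

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    item = {"pk": body["assetId"], "owner": body.get("owner", "unknown")}
    table.put_item(Item=item)
    return {"statusCode": 201, "body": json.dumps({"saved": item["pk"]})}

Wired to API Gateway or an AppSync resolver, handlers like this form the building blocks of the Lambda/DynamoDB stack the posting describes.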
Posted 1 month ago
4.0 - 5.0 years
9 - 19 Lacs
Hyderabad
Work from Office
Hi All, we have immediate openings for the below requirement.

Role: Hadoop Administration
Skill: Hadoop Administrator (with EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, Neo4j, AWS)
Experience: 4 to 9 yrs
Work location: Hyderabad
Interview Mode: 1st round virtual, 2nd round F2F
Notice Period: 15 days to immediate joiners only
Interested candidates can share your CV to sravani.vommi@sonata-software.com
Contact: 7075751998

JD for Hadoop Admin: Hadoop Administrator (with EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, Neo4j, AWS)

Job Summary: We are seeking a highly skilled Hadoop Administrator with hands-on experience managing distributed data platforms such as Hadoop EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, and Neo4j.

Key Responsibilities:

Cluster Management: Administer, manage, and maintain Hadoop EMR clusters, ensuring optimal performance, high availability, and resource utilization. Handle the provisioning, configuration, and scaling of Hadoop clusters, with a focus on EMR, ensuring seamless integration with other ecosystem tools (e.g., Spark, Kafka, HBase). Oversee HBase configurations, performance tuning, and integration within the Hadoop ecosystem. Manage OpenSearch (formerly known as Elasticsearch) for log analytics and large-scale search applications.

Data Integration & Processing: Oversee the performance and optimization of Apache Spark workloads across distributed data environments. Design and manage efficient data pipelines between Snowflake, Kafka, and the Hadoop ecosystem, ensuring seamless data movement and transformation. Implement data storage solutions in Snowflake and manage seamless data transfers to/from Hadoop (EMR) and other environments.

Cloud & AWS Services: Work closely with AWS services such as EC2, S3, ECS, Lambda, IAM, RDS, and CloudWatch to build scalable, cost-efficient solutions for data management and processing. Manage AWS EMR clusters, ensuring they are optimized for big data workloads and integrated with other AWS services.

Security & Compliance: Manage and configure Kerberos authentication and access control mechanisms within the Hadoop ecosystem (HDFS, YARN, Spark) to ensure data security. Implement encryption and secure data transfer policies within Hadoop clusters, Kafka, HBase, and OpenSearch to meet compliance and regulatory requirements. Manage user roles and permissions for access to Snowflake and ensure seamless integration of security policies across platforms.

Monitoring & Troubleshooting: Set up and manage monitoring solutions to ensure the health of the Hadoop ecosystem and related components. Actively monitor and troubleshoot issues with Spark, Kafka, HBase, OpenSearch, and other distributed systems. Provide proactive support to address performance issues, bottlenecks, and failures.

Automation & Optimization: Automate the deployment, scaling, and management of Hadoop and other big data systems using scripting languages (Bash, Python). Optimize the configurations and performance of EMR, Spark, Kafka, HBase, and OpenSearch. Develop scripts and utilities for backup, job monitoring, and performance tuning.
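In the automation spirit of the JD above, here is a hedged boto3 sketch (cluster and instance-group IDs are placeholders) that lists active EMR clusters and resizes a task instance group:

import boto3

emr = boto3.client("emr", region_name="ap-south-1")

# Enumerate clusters that are currently serving work.
active = emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])
for cluster in active["Clusters"]:
    print(cluster["Id"], cluster["Name"], cluster["Status"]["State"])

# Scale a task group up for a heavy batch window (illustrative IDs).
emr.modify_instance_groups(
    ClusterId="j-EXAMPLE123",
    InstanceGroups=[{"InstanceGroupId": "ig-EXAMPLETASK", "InstanceCount": 8}],
)

Production versions of such scripts typically add pagination, retries, and CloudWatch-driven triggers rather than fixed instance counts.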
Posted 1 month ago
4.0 - 9.0 years
8 - 18 Lacs
Bengaluru
Work from Office
Job Title: Backend Developer (FARM Stack Python, FastAPI/Django, MongoDB) Location: South Bengaluru, Karnataka, India Employment Type: Full-Time Experience Required: 4 to 9 years Work Mode: Work from Office only (No Work from Home or Hybrid option) We are seeking a skilled and experienced Backend Developer to join a fast-paced, mission-driven technology team that is transforming how Indians discover and own properties. You will be part of a platform built using the FARM stack (FastAPI/Django, React, MongoDB), contributing to the design and development of scalable backend architectures, cloud-native systems, and microservices. You will work on high-impact solutions using AWS services, Redis for performance optimization, and OpenSearch for advanced data search. This role is ideal for developers passionate about backend engineering, clean architecture, and meaningful technology innovation. Responsibilities: Build and maintain backend services using Python, FastAPI or Django Architect and implement scalable systems using microservices Design and optimize MongoDB schemas and queries Work with AWS (Lambda, API Gateway, DynamoDB, S3, SQS, SNS) Integrate Redis for caching and session management Implement OpenSearch and vector search for advanced search use cases Write and maintain unit tests following TDD principles using tools like pytest Create and maintain Swagger documentation for APIs Use Git for version control and follow best practices for branching and collaboration Contribute to Agile ceremonies including sprint planning and retrospectives Required Skills: 4 to 9 years of backend development experience in Python Strong experience with FastAPI or Django frameworks Deep understanding of MongoDB and NoSQL schema design Experience building microservices and distributed systems Hands-on experience with AWS cloud and serverless architecture Familiarity with Redis, OpenSearch, and vector-based search Proficiency in unit testing, Git workflows, Swagger, and CI/CD pipelines Experience working in Agile teams Preferred Skills: Docker and Kubernetes CI/CD tools like Jenkins, GitLab CI, or CircleCI Knowledge of API security best practices Experience working with large datasets and high availability systems Please Note: While initial HR and technical rounds will be conducted online, attending a final offline (in-person) interview in Bengaluru is mandatory for shortlisted candidates. The client will not confirm selection or issue an offer letter without this in-person interaction. If you are certain you cannot travel to Bengaluru for the offline interview, we kindly request you not to proceed with the online rounds, as the selection process cannot be completed without a face-to-face meeting.
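One recurring pattern in this stack is cache-aside reads with Redis in front of the primary database. A minimal hedged sketch (route, key scheme, and loader are invented):

import json
import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_listing_from_db(listing_id: str) -> dict:
    # Stand-in for a MongoDB or PostgreSQL lookup.
    return {"id": listing_id, "status": "active"}

@app.get("/listings/{listing_id}")
def get_listing(listing_id: str) -> dict:
    key = f"listing:{listing_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit
    doc = load_listing_from_db(listing_id)
    cache.setex(key, 300, json.dumps(doc))      # cache for 5 minutes
    return doc

The 5-minute TTL bounds staleness; for session management, the same setex pattern stores session payloads keyed by session id.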
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Help empower our global customers to connect to culture through their passions.

Why you'll love this role: StockX is an established global startup headquartered in the USA with development offices in Bangalore, India. In the Search & Recommendation team, we work together to productionize custom machine-learning models that can drive product vision and customer impact at scale. We are looking for MLEs who are product-driven and passionate about making ML innovations in areas such as ranking, optimization, natural language processing, information retrieval, graph learning, and reinforcement learning to help improve the StockX buyer/seller experience!

What you'll do: Develop embeddings to collect salient signals of our customers, products, and user interactions. Extract real-time signals and multi-modality data (content and image) from our 5M+ product catalog images and 1M+ listings. Understand semantic content, aesthetic style, and materials for retrieval, ranking, and optimization. Build a real-time, in-session personalization recommendation system. Implement and compare supervised learning models (e.g., LR, GBDT, and DNNs), or ensembles of models, to improve metrics, often with multiple contending objectives (e.g., relevance, degree of personalization, average order value, repeat purchase frequency). Develop models with custom architectures or objective functions that target StockX-specific problems, such as recommendation systems, personalized search, revenue optimization, seller fairness, seasonality, etc. Develop brand-new learning frameworks for query suggestions to understand the buyer experience. Apply the latest advances in deep learning and machine learning to improve buyer and seller experiences on StockX. Prototype, optimize, and productionize large-scale ML models that help deliver key results in the search experience. Conduct A/B experiments to validate ML models and pipelines. Work closely with product managers, data scientists/engineers, full-stack engineers, and designers on product teams to deliver content to tens of millions of users.

About you: Experience with object-oriented or functional software development. Experience working with AWS or other cloud providers. Experience with big data platforms like Spark or Databricks. Experience with machine learning libraries such as TensorFlow, PyTorch, or MXNet. You have dealt with data exploration, analysis, and feature engineering. You have relentlessly high standards for the products you deliver. You work effectively in an agile development process. You have a postgraduate degree in Computer Science or related engineering fields plus 3+ years of machine learning experience, or 5+ years of practical machine learning experience. Experience with Kubernetes and Docker for productionizing models. You have experience building machine learning systems at scale. You have experience using AWS, Databricks, and/or OpenSearch. You have experience building production search, recommendations, advertising, or general e-commerce systems.

About StockX: StockX is proud to be a Detroit-based technology leader focused on the large and growing online market for sneakers, apparel, accessories, electronics, collectibles, trading cards, and more. StockX's powerful platform connects buyers and sellers of high-demand consumer goods from around the world using dynamic pricing mechanics. This approach affords access and market visibility powered by real-time data that empowers buyers and sellers to determine and transact based on market value.
The StockX platform features hundreds of brands across verticals, including Jordan Brand, adidas, Nike, Supreme, BAPE, Off-White, Louis Vuitton, and Gucci; collectibles from brands including LEGO, KAWS, Bearbrick, and Pop Mart; and electronics from industry-leading manufacturers Sony, Microsoft, Meta, and Apple. Launched in 2016, StockX employs 1,000 people across offices and verification centers around the world. Learn more at www.stockx.com.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. This job description is intended to convey information essential to understanding the scope of the job and the general nature and level of work performed by job holders within this job. However, it is not intended to be an exhaustive list of qualifications, skills, efforts, duties, responsibilities, or working conditions associated with the position. StockX reserves the right to amend this job description at any time. StockX may utilize AI to rank job applicant submissions against the position requirements to assist in determining candidate alignment.
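As a toy version of the model-comparison work described in this role, here is a hedged scikit-learn sketch on synthetic click-through-style data (features and labels are simulated, not StockX data):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))      # stand-ins for relevance/price/recency signals
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, f"AUC={auc:.3f}")

Offline AUC comparisons like this are only the first gate; as the posting notes, candidate models still have to win A/B experiments before shipping.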
Posted 1 month ago
2.0 - 7.0 years
5 - 12 Lacs
Bengaluru
Work from Office
Job Description: Backend Developer
Location: South Bengaluru, Karnataka, India
Experience: 2 to 6 years
Salary Range: 5 - 12 LPA
Working Days: Monday to Friday (5-day work week)
Work Mode: Work from Office only (No hybrid or remote option)

About the Opportunity: Join a product-driven tech team building a scalable, intelligent platform that simplifies high-value ownership journeys. This role is ideal for developers who want to work on cloud-native architectures, solve meaningful backend challenges, and drive real-world user impact. You'll be developing and optimizing backend systems using modern stacks and AWS services within a fast-paced, agile environment.

Key Responsibilities
- Design, develop, and deploy cloud-native applications on AWS using Lambda, AppSync, EventBridge, DynamoDB, and OpenSearch.
- Build robust RESTful APIs and ensure smooth integration with internal and external systems.
- Migrate and modernize existing full-stack applications to a cloud-native setup.
- Develop full-stack solutions using TypeScript, JavaScript, Node.js, and Python.
- Monitor and optimize database health using MongoDB and PostgreSQL.
- Maintain CI/CD pipelines using AWS CodePipeline and related tools.
- Ensure scalability, security, and performance of backend systems in a dynamic, fast-moving environment.

Technical Skills Required
- Strong experience with Python, FastAPI, Node.js, and TypeScript.
- Proficiency with MongoDB, PostgreSQL, and familiarity with DynamoDB.
- Solid knowledge of AWS services including Lambda and serverless architecture.
- Good grasp of RESTful API design and cloud-native principles.
- Exposure to front-end tech like Angular or JavaScript is a plus.
- Experience working with CI/CD tools and Agile methodologies (Scrum).

What You Bring
- 2-6 years of experience in backend development, preferably in agile teams.
- A strong problem-solving mindset and ability to take ownership.
- Experience in modern, cloud-native tech stacks.
- Collaborative attitude and clear communication skills.

Why Join Us: This is your chance to work on innovative, tech-first solutions, not a conventional sector job. You'll contribute to building intelligent, scalable platforms with a direct user impact.
- Work on deep-tech problems in asset discovery and digitization.
- Build backend engines using the latest cloud technologies.
- Be part of a transparent, ethical, innovation-focused product company.

Parent Ecosystem & Vision: You'll be contributing to a fast-growing technology initiative backed by a mission-driven ecosystem focused on sustainability, rural development, and innovation. The broader group emphasizes responsible growth, digital transformation, and creating real-world impact through technology.
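To complement the Lambda/DynamoDB sketch shown with the similar posting above, here is a hedged boto3 sketch of the EventBridge side of this stack (bus name, source, and payload are invented):

import json
import boto3

events = boto3.client("events", region_name="ap-south-1")

events.put_events(
    Entries=[
        {
            "EventBusName": "platform-bus",            # hypothetical custom bus
            "Source": "app.assets",
            "DetailType": "AssetRegistered",
            "Detail": json.dumps({"assetId": "A-1001", "owner": "demo"}),
        }
    ]
)

Downstream rules on the bus can then fan the event out to Lambda consumers or queues, decoupling producers from the services that react to them.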
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Mumbai
Work from Office
The Senior Spark Tech Lead will be responsible for integrating and maintaining the Quantexa platform, Spark-based software provided by a UK fintech, into our existing systems to enhance our anti-money-laundering capabilities. This role requires deep expertise in Spark development, as well as the ability to analyze and understand the underlying data. Additionally, the candidate should have an interest in exploring open-source applications distributed by Apache, Kubernetes, OpenSearch, and Oracle, and should be able to work as a Scrum Master.

Direct Responsibilities: Integrate and upgrade the Quantexa tool with our existing systems for enhanced anti-money-laundering measures. Develop and maintain Spark-based applications deployed on Kubernetes clusters. Conduct data analysis to understand and interpret underlying data structures. Collaborate with cross-functional teams to ensure seamless integration and functionality. Stay updated with the latest trends and best practices in Spark development and Kubernetes.

Contributing Responsibilities: Take complete ownership of project activities and understand each task in detail. Ensure that the team delivers on time without delays and that deliveries meet high quality standards. Handle estimation, planning, and scheduling of the project, ensuring all internal timelines are respected and the project is on track. Work with the team to develop robust software that adheres to timelines and follows all standard guidelines. Act proactively to ensure smooth team operations and effective collaboration. Make sure the team adheres to all compliance processes and intervene if required. Assign tasks to the team and track them until completion. Report status proactively to management. Identify risks in the project and highlight them to the manager; create contingency, backup, and mitigation plans as necessary. Make decisions independently based on the situation. Mentor and coach team members as needed to meet target goals. Gain functional knowledge of the applications worked upon and create knowledge repositories for future reference. Arrange knowledge-sharing sessions to enhance the team's functional capability. Evaluate new tools and come up with POCs. Provide feedback on the team to upper management on a timely basis.

Required Qualifications: 7+ years of experience in development. Extensive experience in Hadoop, Spark, and Scala development (5 years minimum). Strong analytical skills and experience in data analysis (SQL), data processing (such as ETL), parsing, data mapping, and handling real-life data quality issues. Excellent problem-solving abilities and attention to detail. Strong communication and collaboration skills. Experience in Agile development. High-quality coding skills, including code control, unit testing, design, and documentation (code and tests). Experience with tools such as Sonar. Experience with Git and Jenkins.
Specific Qualifications: Experience with development and deployment of Spark applications on Kubernetes clusters. Hands-on development experience (Java, Scala, etc.) via system integration projects; Python and Elastic are optional.

Behavioural Skills: Ability to collaborate/teamwork. Adaptability. Creativity and innovation/problem solving. Attention to detail/rigor.

Transversal Skills: Analytical ability. Ability to develop and adapt a process. Ability to develop and leverage networks.

Education Level: Bachelor's degree or equivalent. Experience Level: At least 7 years. Fluent in English. Team player. Strong analytical skills. Quality-oriented and well organized. Willing to work under pressure and mission-oriented. Excellent oral and written communication skills, motivational skills, and a results orientation.
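Given the posting's emphasis on handling real-life data quality issues before entities reach a matching engine like Quantexa, here is a hedged PySpark sketch (input path and column names invented) of a typical pre-linking quality pass:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/parties")     # hypothetical input

total = df.count()
null_names = df.filter(F.col("party_name").isNull()).count()
duplicate_ids = total - df.dropDuplicates(["party_id"]).count()
print(f"rows={total} null_names={null_names} duplicate_ids={duplicate_ids}")

clean = (
    df.dropDuplicates(["party_id"])
      .filter(F.col("party_name").isNotNull())
      .withColumn("party_name", F.trim(F.upper(F.col("party_name"))))
)
clean.write.mode("overwrite").parquet("/data/parties_clean")

On Kubernetes, a job like this would be submitted with spark-submit against the cluster's k8s master rather than run locally; the transformation logic is unchanged.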
Posted 1 month ago
7.0 - 12.0 years
10 - 20 Lacs
Gurugram, Chennai, Bengaluru
Hybrid
Role & responsibilities: Strong knowledge of Camunda 8.5+ architecture (Zeebe engine, Identity (with Keycloak), Optimize, Elasticsearch/OpenSearch). Should have done Camunda 8.5+ installation. Should have extensive hands-on experience in BPM process design and Java client application development. Should have debugging and testing skills. Should have experience in Spring Boot microservices, AWS services, and OpenSearch. Good understanding of agile development methodology and agile tools like Jira and Confluence, with experience in UCD, CI/CD, Jenkins, and GitLab. Proficient in written and verbal communication; strong analytical and problem-solving abilities.
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Requisition ID # 25WD89404

Position overview: As a Senior ML Engineer on the team, you will be responsible for leading and contributing to the design, analysis, and delivery of data-driven solutions to significant business challenges across dedicated RAG programs of work. Your expertise in Information Retrieval as a theoretical discipline, and in OpenSearch/ElasticSearch as the enabling tool, will drive end-to-end planning, POC-building, offline and online evaluation, and the delivery of features into the hands of users. The Senior MLE will work to deliver ML-driven technical solutions to customer needs in our platform, and will help coordinate delivery as part of multi-disciplinary stakeholder teams. This role is hands-on, focusing on technical problem solving with the latest in NLP and ML technology.

Our team culture is built on collaboration, mutual support, and continuous learning. As a Senior MLE, you will provide technical guidance and expertise for the team and other stakeholders. We emphasize an agile, hands-on, and technical approach at all levels of the team. As a group, we want to continuously improve our machine learning as well as our knowledge of trends and techniques relevant to our areas. Our team strives for excellence in the theory and practice of machine learning, and we encourage personal development and knowledge sharing.

Responsibilities: Design and implement machine learning capabilities that improve Autodesk's RAG platforms. Perform statistical and data analysis and exploration to generate datasets for model training and development. Collaborate with other members of the team to reach better solutions, and to position our team at the cutting edge of technology and ML practice. Provide technical leadership and mentorship for less-experienced members of the team to deliver key ML-powered features for our Digital Customer Platform. Partner with stakeholders to solve important business objectives across a range of problems. Translate business requirements and objectives into problems that can be solved with a combination of data, statistics, and machine learning.

Minimum Qualifications: MS/M.Tech/M.Math or PhD in Computer Science, Statistics, Mathematics, Physics, Engineering, Economics, Computational Linguistics, or a related field. 3+ years of applicable work experience in ML. Hands-on experience working with OpenSearch/ElasticSearch/Lucene/Solr on large text data. Hands-on experience with NLU and integrating traditional search techniques into RAG. Proficiency with the Python machine learning stack, e.g., Pandas. Knowledge of experimental design and analysis of results. Demonstrated expertise in applying machine learning, including both deep learning (PyTorch) and classical ML (Scikit-Learn). Demonstrated experience leading machine learning teams in deploying and improving ML features in production. Demonstrated experience working in cross-functional teams to deliver ML solutions in production.

#LI-RV1
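As a sketch of the traditional-search-plus-vectors retrieval this role centers on, here is a hedged opensearch-py example combining a BM25 match clause with a kNN clause; the index, field names, and embedding function are assumptions, and the exact kNN syntax depends on the OpenSearch version and index settings:

from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

def embed(text: str) -> list[float]:
    raise NotImplementedError("plug in an embedding model here")

def hybrid_search(question: str, k: int = 5) -> list[dict]:
    body = {
        "size": k,
        "query": {
            "bool": {
                "should": [
                    {"match": {"body": question}},       # lexical (BM25) leg
                    {"knn": {"embedding": {"vector": embed(question), "k": k}}},
                ]
            }
        },
    }
    resp = client.search(index="docs", body=body)
    return [hit["_source"] for hit in resp["hits"]["hits"]]

Offline evaluation then compares such hybrid retrieval against each leg alone on recall and answer-grounding metrics, which is the kind of end-to-end evaluation the position describes.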
Learn More About Autodesk: Welcome to Autodesk! Amazing things are created every day with our software, from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk; our Culture Code is at the core of everything we do. Our values and ways of working help our people thrive and realize their potential, which leads to even better outcomes for our customers.

When you're an Autodesker, you can be your whole, authentic self and do meaningful work that helps build a better future for all. Ready to shape the world and your future? Join us!

Salary transparency: Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, we also have a significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package.

Diversity & Belonging: We take pride in cultivating a culture of belonging and an equitable workplace where everyone can thrive.

Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Are you looking to solve hard problems and enjoy working with teammates who have diverse perspectives? If so, we would love to help you excel here at Kensho.

About The Team: Kensho's Applications group develops the web apps and APIs that deliver Kensho's AI capabilities to our customers. Our teams are small, product-focused, and intent on shipping high-quality code that best leverages our efforts. We're collegial, humble, and inquisitive, and we delight in learning from teammates with backgrounds, skills, and interests different from our own.

The Kensho Link team, within the Applications department, builds a machine learning service that allows users to map entities in their datasets to unique entities drawn from S&P Global's world-class company database with precision and speed. Link started as an internal Kensho project to help the S&P Global Market Intelligence team integrate datasets more quickly into their platform. It uses ML-based algorithms trained to return high-quality links, even when the data inputs are incomplete or contain errors. In simple words, Kensho's Link product helps connect the disconnected information about a company in one place, and it does so with scale. Link leverages a variety of NLP and ML techniques to process and link millions of company entities in hours.

About The Role: As a Senior Backend Engineer you will develop reliable, secure, and performant APIs that apply Kensho's AI capabilities to specific customer workflows. You will collaborate with colleagues from Product, Machine Learning, Infrastructure, and Design, as well as with other engineers within Applications. You have a demonstrated capacity for depth, and are comfortable working with a broad range of technologies. Your verbal and written communication is proactive, efficient, and inclusive of your geographically-distributed colleagues. You are a thoughtful, deliberate technologist and share your knowledge generously. (Equivalent to Grade 11 Role - Internal.)

You will: Design, develop, test, document, deploy, maintain, and improve software. Manage individual project priorities, deadlines, and deliverables. Work with key stakeholders to develop system architectures, API specifications, implementation requirements, and complexity estimates. Test assumptions through instrumentation and prototyping. Promote ongoing technical development through code reviews, knowledge sharing, and mentorship. Optimize application scaling: efficiently scale ML applications to maximize compute resource utilization and meet high customer demand. Address technical debt: proactively identify and propose solutions to reduce technical debt within the tech stack. Enhance user experiences: collaborate with Product and Design teams to develop ML-based solutions that enhance user experiences and align with business goals. Ensure API security and data privacy by implementing best practices and compliance measures. Monitor and analyze API performance and reliability, making data-driven decisions to improve system health. Contribute to architectural discussions and decisions, ensuring scalability, maintainability, and performance of the backend systems.
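As an illustration of the API-plus-worker shape such a service typically takes, here is a hedged sketch (endpoint, queue, and task body are invented, not Kensho's actual design) pairing FastAPI with a Celery task on RabbitMQ, both drawn from the technology list below:

from celery import Celery
from fastapi import FastAPI

celery_app = Celery("link", broker="amqp://guest@localhost//")
app = FastAPI()

@celery_app.task
def link_entities(rows: list[dict]) -> int:
    # Stand-in for the real matching pipeline against the company database.
    return len(rows)

@app.post("/v1/link-jobs")
def submit(rows: list[dict]) -> dict:
    job = link_entities.delay(rows)      # enqueue; a worker picks it up
    return {"job_id": job.id, "rows": len(rows)}

Returning a job id keeps the API responsive while long-running linking happens asynchronously; clients poll or subscribe for results.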
Qualifications
At least 5+ years of direct experience developing customer-facing APIs within a team
Thoughtful and efficient communication skills (both verbal and written)
Experience developing RESTful APIs using a variety of tools (a minimal FastAPI sketch follows this posting)
Experience turning abstract business requirements into concrete technical plans
Experience working across many stages of the software development lifecycle
Sound reasoning about the behavior and performance of loosely-coupled systems
Proficiency with algorithms (including time and space complexity analysis), data structures, and software architecture
At least one domain of demonstrable technical depth
Familiarity with CI/CD practices and tools to streamline deployment processes
Experience with containerization technologies (e.g., Docker, Kubernetes) for application deployment and orchestration

Technologies We Love
Python, Django, FastAPI
mypy, OpenAPI
RabbitMQ, Celery, Kafka
OpenSearch, PostgreSQL, Redis
Git, Jsonnet, Jenkins, Docker, Kubernetes
Airflow, AWS, Terraform
Grafana, Prometheus
ML Libraries: PyTorch, Scikit-learn, Pandas

What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit:

Inclusive Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering an inclusive workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and equal opportunity, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision
-----------------------------------------------------------
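As promised in the qualifications above, here is a minimal, hypothetical FastAPI sketch of the kind of customer-facing API this role builds. The endpoint, models, and in-memory store are all invented placeholders, not details from the posting:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example Link API")

class LinkRequest(BaseModel):
    company_name: str

class LinkResponse(BaseModel):
    entity_id: str
    confidence: float

# Invented in-memory stand-in for a real entity store.
FAKE_STORE = {"s&p global inc.": ("C002", 0.99)}

@app.post("/link", response_model=LinkResponse)
def link(req: LinkRequest) -> LinkResponse:
    """Map a raw company name to a canonical entity ID."""
    match = FAKE_STORE.get(req.company_name.lower())
    if match is None:
        raise HTTPException(status_code=404, detail="no confident match")
    entity_id, confidence = match
    return LinkResponse(entity_id=entity_id, confidence=confidence)
```

Served with, for example, `uvicorn example_link:app`, FastAPI generates the OpenAPI schema and request validation automatically, which is one reason stacks like the one listed above pair it with mypy and OpenAPI tooling.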
Posted 1 month ago
6.0 - 9.0 years
14 - 22 Lacs
Pune, Chennai
Work from Office
Hiring for a top IT company. Designation: Python Developer. Skills: AWS SDK + AI services integration. Location: Pune/Chennai. Experience: 6-8 yrs. Best CTC. Contacts: Surbhi: 9887580624, Anchal: 9772061749, Gitika: 8696868124, Shivani: 7375861257. Team Converse
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
The Opportunity
"We are seeking a senior software engineer to undertake a range of feature development tasks that continue the evolution of our DMP Streaming product. You will demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading-edge technologies and integration frameworks. Given your depth of experience, we also want you to technically guide more junior members of the team, instilling both good engineering practices and inspiring them to grow" - Software Quality Assurance Director

What You'll Contribute
Implement product changes, undertaking detailed design, programming, unit testing and deployment as required by our SDLC process
Investigate and resolve reported software defects across supported platforms
Work in conjunction with product management to understand business requirements and convert them into effective software designs that will enhance the current product offering
Produce component specifications and prototypes as necessary
Provide realistic and achievable project estimates for the creation and development of solutions; this information will form part of a larger release delivery plan
Develop and test software components of varying size and complexity
Design and execute unit, link and integration test plans, and document test results; create test data and environments as necessary to support the required level of validation
Work closely with the quality assurance team and assist with integration testing, system testing, acceptance testing, and implementation
Produce relevant system documentation
Participate in peer review sessions to ensure ongoing quality of deliverables; validate other team members' software changes, test plans and results
Maintain and develop industry knowledge, skills and competencies in software development

What We're Seeking
A Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Java software development experience within an industry setting
Ability to work in both Windows and UNIX/Linux operating systems
Detailed understanding of software and testing methods
Strong foundation and grasp of design models and database structures
Proficiency in Kubernetes, Docker, and Kustomize
Exposure to the following technologies: Apache Storm, MySQL or Oracle, Kafka, Cassandra, OpenSearch, and API (REST) development
Familiarity with Eclipse, Subversion and Maven
Ability to lead and manage others independently on major feature changes
Excellent communication skills with the ability to articulate information clearly with architects, and discuss strategy/requirements with team members and the product manager
Quality-driven work ethic with meticulous attention to detail
Ability to function effectively in a geographically-diverse team
Ability to work within a hybrid Agile methodology
Understanding of the design and development approaches required to build a scalable infrastructure/platform for large amounts of data ingestion, aggregation, integration and advanced analytics
Experience developing and deploying applications into AWS or a private cloud
Exposure to any of the following: Hadoop, JMS, Zookeeper, Spring, JavaScript, Angular, UI Development
Posted 1 month ago
4.0 - 9.0 years
4 - 9 Lacs
Navi Mumbai, Maharashtra, India
On-site
Role Responsibilities:
Build and maintain search and data pipelines using AWS and Python
Optimize and manage cloud-based applications and integrations
Collaborate with frontend and DevOps teams in an Agile setup
Troubleshoot and enhance application performance and search accuracy

Key Deliverables:
Scalable AWS-based search solutions
Efficient Python scripts for data handling and indexing
Integrated CI/CD pipelines for deployments
Cloud migration and optimization support
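As an illustration of the search-pipeline work described above, here is a minimal, hypothetical sketch of indexing and querying documents with the opensearch-py client against an AWS-hosted cluster. The host, credentials, index name, and document schema are invented placeholders, not details from this role:

```python
from opensearchpy import OpenSearch

# Hypothetical cluster endpoint and credentials -- replace with real values.
client = OpenSearch(
    hosts=[{"host": "search-example.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("user", "password"),
    use_ssl=True,
)

INDEX = "products"  # invented index name

# Index a document (creates the index on first write with default settings).
client.index(
    index=INDEX,
    id="1",
    body={"name": "wireless mouse", "category": "electronics"},
    refresh=True,  # make it searchable immediately (fine for demos, not bulk loads)
)

# Run a simple full-text match query and print the hits.
hits = client.search(
    index=INDEX,
    body={"query": {"match": {"name": "mouse"}}},
)
for hit in hits["hits"]["hits"]:
    print(hit["_id"], hit["_source"]["name"])
```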