5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineer

We are looking for an experienced Data Engineer with strong expertise in Snowflake, dbt, Airflow, AWS, and modern data technologies such as Python, Apache Spark, and NoSQL databases. The role focuses on designing, building, and optimizing data pipelines to support analytical and regulatory needs in the banking domain.

Key Responsibilities
- Design and implement scalable and secure data pipelines using Airflow, dbt, Snowflake, and AWS services (a minimal orchestration sketch follows this posting).
- Develop data transformation workflows and modular SQL logic using dbt for a centralized data warehouse in Snowflake.
- Build batch and near real-time data processing solutions using Apache Spark and Python.
- Work with structured and unstructured banking datasets stored across S3, NoSQL (e.g., MongoDB, DynamoDB), and relational databases.
- Ensure data quality, lineage, and observability through logging, testing, and monitoring tools.
- Support data needs for compliance, regulatory reporting, risk, fraud, and customer analytics.
- Ensure secure handling of sensitive data aligned with banking compliance standards (e.g., PII masking, role-based access).
- Collaborate closely with business users, data analysts, and data scientists to deliver production-grade datasets.
- Implement best practices for code versioning, CI/CD, and environment management.

Required Skills and Qualifications
- 5-8 years of experience in data engineering, preferably in banking, fintech, or regulated industries.
- Hands-on experience with:
  - Snowflake (data modeling, performance tuning, security)
  - dbt (modular SQL transformation, documentation, testing)
  - Airflow (orchestration, DAGs)
  - AWS (S3, Glue, Lambda, Redshift, IAM)
  - Python (ETL scripting, data manipulation)
  - Apache Spark (batch/stream processing using PySpark or Scala)
  - NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra)
- Strong SQL skills and experience in performance optimization and cost-efficient query design.
- Exposure to data governance, compliance, and security in the banking industry.
- Experience working with large-scale datasets and complex data transformations.
- Familiarity with version control (e.g., Git) and CI/CD pipelines.

Preferred Qualifications
- Prior experience in banking/financial services.
- Knowledge of Kafka or other streaming platforms.
- Exposure to data quality tools (e.g., Great Expectations, Soda).
- Certifications in Snowflake, AWS, or dbt.
- Strong communication skills and ability to work with cross-functional teams.

About Convera
Convera is the largest non-bank B2B cross-border payments company in the world. Formerly Western Union Business Solutions, we leverage decades of industry expertise and technology-led payment solutions to deliver smarter money movements to our customers, helping them capture more value with every transaction. Convera serves more than 30,000 customers ranging from small business owners and enterprise treasurers to educational institutions, financial institutions, law firms, and NGOs. Our teams care deeply about the value we bring to our customers, which makes Convera a rewarding place to work. This is an exciting time for our organization as we build our team with growth-minded, results-oriented people who are looking to move fast in an innovative environment. As a truly global company with employees in over 20 countries, we are passionate about diversity; we seek and celebrate people from different backgrounds, lifestyles, and unique points of view. We want to work with the best people and ensure we foster a culture of inclusion and belonging.
We offer an abundance of competitive perks and benefits, including:
- Competitive salary
- Opportunity to earn an annual bonus
- Great career growth and development opportunities in a global organization
- A flexible approach to work

There are plenty of amazing opportunities at Convera for talented, creative problem solvers who never settle for good enough and are looking to transform business-to-business payments. Apply now if you're ready to unleash your potential.
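To make the posting's core stack concrete, here is a minimal, hedged sketch of the orchestration pattern it describes: an Airflow DAG invoking the dbt CLI to build and then test models in a Snowflake warehouse. The DAG id, project directory, and target name are hypothetical placeholders, not details from the posting.

```python
# Illustrative sketch only: Airflow orchestrating dbt transformations
# that build a Snowflake warehouse layer. All names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="banking_dwh_daily",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run the modular dbt SQL models against the Snowflake target.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --target prod",
    )
    # dbt tests provide the data-quality checks the role calls for.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt --target prod",
    )
    dbt_run >> dbt_test  # test only after the models build successfully
```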
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Network Architect (Network Traffic Intelligence & Flow Data Systems)
Location: Pune, India (with travel to onsite)
Experience Required: 8+ years in network traffic monitoring and flow data systems, with 2+ years of hands-on experience configuring and deploying nProbe Cento in high-throughput environments.

Overview:
We are seeking a specialist with deep expertise in network traffic probes, specifically nProbe Cento, to support the deployment, configuration, and integration of flow record generation systems. The consultant will work closely with Kafka developers, solution architects, and network teams to ensure accurate, high-performance flow data capture and export. This role is critical to ensuring the scalability, observability, and compliance of the network traffic record infrastructure.

Key Responsibilities:
- Design and document the end-to-end architecture for network traffic record systems, including flow ingestion, processing, storage, and retrieval.
- Deploy and configure nProbe Cento on telecom-grade network interfaces.
- Tune probe performance using PF_RING ZC drivers for high-speed traffic capture.
- Configure IPFIX/NetFlow export and integrate with Apache Kafka for real-time data streaming (a minimal consumer sketch follows this posting).
- Set up DPI rules to identify application-level traffic (e.g., popular messaging and social media applications).
- Align the flow record schema with the Detail Record specification.
- Lead the integration of nProbe Cento, Kafka, Apache Spark, and Cloudera CDP components into a unified data pipeline.
- Collaborate with Kafka and API teams to ensure compatibility of data formats and ingestion pipelines.
- Define interface specifications, deployment topologies, and data schemas for flow records and detail records.
- Monitor probe health, performance, and packet loss; implement logging and alerting mechanisms.
- Collaborate with security teams to implement data encryption, access control, and compliance with regulatory standards.
- Guide development and operations teams through SIT/UAT, performance tuning, and production rollout.
- Provide documentation, training, and handover materials for long-term operational support.

Required Skills & Qualifications:
- Proven hands-on experience with nProbe Cento in production environments.
- Strong understanding of IPFIX, NetFlow, sFlow, and flow-based monitoring principles.
- Experience with Cloudera SDX, Ranger, Atlas, and KMS for data governance and security.
- Familiarity with HashiCorp Vault for secrets management.
- Strong understanding of network packet brokers (e.g., Gigamon, Ixia) and traffic aggregation strategies.
- Proven ability to design high-throughput, fault-tolerant, and cloud-native architectures.
- Experience with Kafka integration, including topic configuration and message formatting.
- Familiarity with DPI technologies and application traffic classification.
- Proficiency in Linux system administration, shell scripting, and network interface tuning.
- Knowledge of telecom network interfaces and traffic tapping strategies.
- Experience with PF_RING, ntopng, and related ntop tools (preferred).
- Ability to work independently and collaboratively with cross-functional technical teams.
- Excellent documentation and communication skills.
- Certifications in Cloudera, Kafka, or cloud platforms (e.g., AWS Architect, GCP Data Engineer) will be advantageous.

A little about us: Innova Solutions is a diverse and award-winning global technology services partner.
We provide our clients with strategic technology, talent, and business transformation solutions, enabling them to be leaders in their field.
- Founded in 1998, headquartered in Atlanta (Duluth), Georgia.
- Employs over 50,000 professionals worldwide, with annual revenue approaching $3.0B.
- Delivers strategic technology and business transformation solutions globally.
- Operates through global delivery centers across North America, Asia, and Europe.
- Provides services for data center migration and workload development for cloud service providers.
- Awardee of prestigious recognitions, including:
  - Women's Choice Awards: Best Companies to Work for Women & Millennials, 2024
  - Forbes: America's Best Temporary Staffing and Best Professional Recruiting Firms, 2023
  - American Best in Business, Globee Awards: Healthcare Vulnerability Technology Solutions, 2023
  - Global Health & Pharma: Best Full Service Workforce Lifecycle Management Enterprise, 2023
  - 3 SBU Leadership in Business Awards
  - Stevie International Business Awards: Denials Remediation Healthcare Technology Solutions, 2023
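As a hedged illustration of the probe-to-Kafka integration this role centers on, the sketch below consumes nProbe-exported flow records from a Kafka topic using the kafka-python client. The topic, broker address, and JSON field names are assumptions; the actual fields depend on the nProbe export template configured.

```python
# Illustrative sketch: a downstream consumer of nProbe-exported flow
# records on Kafka. Topic, brokers, and field names are hypothetical.
import json

from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "flows",                                   # hypothetical topic name
    bootstrap_servers=["broker1:9092"],        # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw),
    auto_offset_reset="earliest",
)

for record in consumer:
    flow = record.value
    # Inspect DPI classification; field names assume a template that
    # exports L7 protocol and IPv4 endpoints.
    if flow.get("L7_PROTO_NAME") in {"WhatsApp", "Telegram"}:
        print(flow.get("IPV4_SRC_ADDR"), "->", flow.get("IPV4_DST_ADDR"))
```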
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer Consultant/Expert 34326
Location: Chennai
Work Type: Contract (Onsite)
Compensation: Up to ₹21-24 LPA (based on experience)
Notice Period: Immediate joiners preferred
Experience: Minimum 7+ years (9 preferred)

Position Summary
Seeking a skilled and motivated Full Stack Java Developer to join a growing software engineering team responsible for building and supporting a global logistics data warehouse platform. This platform provides end-to-end visibility into vehicle shipments using GCP cloud technologies, microservices architecture, and real-time data processing pipelines.

Key Responsibilities
- Design, develop, and maintain robust backend systems using Java, Spring Boot, and microservices architecture
- Implement and optimize REST APIs, and integrate with Pub/Sub, Kafka, and other event-driven systems
- Build and maintain scalable data processing workflows using GCP BigQuery, Cloud Run, and Terraform
- Collaborate with product managers, architects, and fellow engineers to deliver impactful features
- Perform unit testing, integration testing, and support functional and user acceptance testing
- Conduct code reviews and provide mentorship to other engineers to improve code quality and standards
- Monitor system performance and implement strategies for optimization and scalability
- Develop and maintain ETL/data pipelines to transform and manage logistics data
- Continuously refactor and enhance existing code for maintainability and performance

Required Skills
- Strong hands-on experience with Java, Spring Boot, and full stack development
- Proficiency with GCP, including at least 1 year of experience with BigQuery
- Experience with GCP Cloud Run, Terraform, and deploying containerized services
- Deep understanding of REST APIs, microservices, Pub/Sub, Kafka, and cloud-native architectures
- Experience in ETL development, data engineering, or data warehouse projects
- Exposure to AI/ML integration in enterprise applications is a plus

Preferred Skills
- Familiarity with AI agents and modern AI-driven data products
- Experience working with global logistics, supply chain, or transportation domains

Education Requirements
- Required: Bachelor's degree in Computer Science, Information Technology, or a related field
- Preferred: Advanced degree or specialized certifications in cloud or data engineering

Work Environment
- Location: Chennai (onsite required)
- Work closely with cross-functional product teams in an Agile setup
- Fast-paced, data-driven environment requiring strong communication and problem-solving skills

Skills: rest apis, cloud run, bigquery, gcp, pub/sub, data, data engineering, kafka, microservices, terraform, cloud, spring boot, data warehouse, java, code, etl development, full stack development, gcp cloud platform
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Software Engineer Consultant/Expert – GCP Data Engineer 34350
Location: Chennai
Engagement Type: Contract
Compensation: Up to ₹18 LPA
Notice Period: Immediate joiners preferred
Work Mode: Onsite

Role Overview
This role is for a proactive Google Cloud Platform (GCP) Data Engineer who will contribute to the modernization of a cloud-based enterprise data warehouse. The ideal candidate will focus on integrating diverse data sources to support advanced analytics and AI/ML-driven solutions, as well as designing scalable pipelines and data products for real-time and batch processing. This opportunity is ideal for individuals who bring both architectural thinking and hands-on experience with GCP services, big data processing, and modern DevOps practices.

Key Responsibilities
- Design and implement scalable, cloud-native data pipelines and solutions using GCP technologies
- Develop ETL/ELT processes to ingest and transform data from legacy and modern platforms
- Collaborate with analytics, AI/ML, and product teams to enable data accessibility and usability
- Analyze large datasets and perform impact assessments across various functional areas
- Build data products (data marts, APIs, views) that power analytical and operational platforms
- Integrate batch and real-time data using tools like Pub/Sub, Kafka, Dataflow, and Cloud Composer (a minimal Dataflow sketch follows this posting)
- Operationalize deployments using CI/CD pipelines and infrastructure as code
- Ensure performance tuning, optimization, and scalability of data platforms
- Contribute to best practices in cloud data security, governance, and compliance
- Provide mentorship, guidance, and knowledge-sharing within cross-functional teams

Mandatory Skills
- GCP expertise with hands-on use of services including:
  - BigQuery, Dataflow, Data Fusion, Dataform, Dataproc
  - Cloud Composer (Airflow), Cloud SQL, Compute Engine
  - Cloud Functions, Cloud Run, Cloud Build, App Engine
- Strong knowledge of SQL, data modeling, and data architecture
- Minimum 5+ years of experience in SQL and ETL development
- At least 3 years of experience in GCP cloud environments
- Experience with Python, Java, or Apache Beam
- Proficiency in Terraform, Docker, Tekton, and GitHub
- Familiarity with Apache Kafka, Pub/Sub, and microservices architecture
- Understanding of AI/ML integration, data science concepts, and production datasets

Preferred Experience
- Hands-on expertise in container orchestration (e.g., Kubernetes)
- Experience working in regulated environments (e.g., finance, insurance)
- Knowledge of DevOps pipelines, CI/CD, and infrastructure automation
- Background in coaching or mentoring junior data engineers
- Experience with data governance, compliance, and security best practices in the cloud
- Use of project management tools such as JIRA
- Proven ability to work independently in fast-paced or ambiguous environments
- Strong communication and collaboration skills to interact with cross-functional teams

Education Requirements
- Required: Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field
- Preferred: Master's degree or relevant industry certifications (e.g., GCP Data Engineer Certification)

Skills: bigquery, cloud sql, ml, apache beam, app engine, gcp, dataflow, microservices architecture, cloud functions, compute engine, project management tools, data science concepts, security best practices, pub/sub, ci/cd, compliance, cloud run, java, cloud build, jira, data, pipelines, dataproc, sql, tekton, python, github, data modeling, cloud composer, terraform, data fusion, cloud, data architecture, apache kafka, ai/ml integration, docker, data governance, infrastructure automation, dataform
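To illustrate the Pub/Sub-to-BigQuery integration the responsibilities describe, here is a minimal Apache Beam (Dataflow) streaming sketch in Python. The project, topic, and table names are hypothetical placeholders.

```python
# Illustrative sketch: a streaming Beam pipeline that reads events from
# Pub/Sub and appends them to BigQuery. All resource names are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/events")   # hypothetical
        | "Parse" >> beam.Map(json.loads)                # bytes -> dict
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",               # hypothetical
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```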
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
mthree is seeking a Java Developer to join a highly regarded multinational investment bank and financial services company.

Job Description:
Role: Java Developer
Team: Payment Gateway
Location: Pune (hybrid model with 2-3 days per week in the office)

Key Responsibilities
• Develop and Maintain Applications: Design, develop, and maintain server-side applications using Java 8 to ensure high performance and responsiveness to requests from the front end.
• Scalability Solutions: Architect and implement scalable solutions for client risk management, ensuring the system can handle large volumes of transactions and data.
• Data Streaming and Caching: Utilize Kafka or Redis for efficient data streaming and caching, ensuring real-time data processing and low-latency access.
• Multithreading and Synchronization: Implement multithreading and synchronization techniques to enhance application performance and ensure thread safety.
• Microservices Development: Develop and deploy microservices using Spring Boot, ensuring modularity and ease of maintenance.
• Design Patterns: Apply design patterns to solve complex software design problems, ensuring code reusability and maintainability.
• Linux Optimization: Ensure applications are optimized for Linux environments, including performance tuning and troubleshooting.
• Collaboration: Collaborate with cross-functional teams, including front-end developers, QA engineers, and product managers, to define, design, and ship new features.
• Troubleshooting: Troubleshoot and resolve production issues, ensuring minimal downtime and optimal performance.

Requirements:
• Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field.
• Programming Expertise: Proven experience (roughly 2-5 years) in Java 8+ programming, with a strong understanding of object-oriented principles and design.
• Data Technologies: Understanding of Kafka or Redis (or a similar cache), including setup, configuration, and optimization.
• Concurrency: Experience with multithreading and synchronization, ensuring efficient and safe execution of concurrent processes.
• Frameworks: Proficiency in Spring Boot, including developing RESTful APIs and integrating with other services.
• Design Patterns: Familiarity with design patterns and their application in solving software design problems.
• Operating Systems: Solid understanding of Linux operating systems, including shell scripting and system administration.
• Problem-Solving: Excellent problem-solving skills and attention to detail, with the ability to debug and optimize code.
• Communication: Strong communication and teamwork skills, with the ability to work effectively in a collaborative environment.

Preferred Qualifications:
• Industry Experience: Experience in the financial services industry is a plus.
• Additional Skills: Knowledge of other programming languages and technologies, such as Python or Scala.
• DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
Posted 1 week ago
5.0 years
0 Lacs
Delhi, India
On-site
About Cisive
Cisive is a trusted partner for comprehensive, high-risk, compliance-driven background screening and workforce monitoring solutions, specializing in highly regulated industries such as healthcare, financial services, and transportation. We catch what others miss, and we are dedicated to helping our clients effortlessly secure the right talent. As a global leader, Cisive empowers organizations to hire with confidence.

Through our PreCheck division, Cisive provides specialized background screening and credentialing solutions tailored for healthcare organizations, ensuring patient and workforce safety. Driver iQ, our transportation-focused division, delivers FMCSA-compliant screening and monitoring solutions that help carriers hire and retain the safest drivers on the road. Unlike traditional background screening providers, Cisive takes a technology-first approach powered by advanced automation, human expertise, and compliance intelligence, all delivered through a scalable platform. Our solutions include continuous workforce monitoring, identity verification, criminal record screening, license monitoring, drug & health screening, and global background checks.

Job Summary
The Senior Software Developer is responsible for designing and delivering complex, scalable software systems, leading technical initiatives, and mentoring junior developers. This role plays a key part in driving high-impact projects and ensuring the delivery of robust, maintainable solutions. In addition to core development duties, the role works closely with the business to identify opportunities for automation and web scraping to improve operational efficiency.

The Senior Software Developer will collaborate with Cisive's Software Development team and client stakeholders to support, analyze, mine, and report on IT and business data, focusing on optimizing data handling for web scraping processes. This individual will manage and consult on data flowing into and out of Cisive systems, ensuring data integrity, performance, and compliance with operational standards. The role is critical to achieving service excellence and automation across Cisive's diverse product offerings and will continuously strive to enhance process efficiency and data flow across platforms.
Duties and Responsibilities
- Lead the design, architecture, and implementation of scalable and maintainable web scraping solutions using the Scrapy framework, integrated with tools such as Kafka, Zookeeper, and Redis (a minimal spider sketch follows this posting)
- Develop and maintain web crawlers to automate data extraction from various sources, ensuring alignment with user and application requirements
- Research, design, and implement automation strategies across multiple platforms, tools, and technologies to optimize business processes
- Monitor, troubleshoot, and resolve issues affecting the performance, reliability, and stability of scraping systems and automation tools
- Serve as a Subject Matter Expert (SME) for automation systems, providing guidance and support to internal teams
- Analyze and validate extracted data to ensure accuracy, integrity, and compliance with Cisive's data standards
- Define, implement, and enforce data requirements, standards, and best practices to ensure consistent and efficient operations
- Collaborate with stakeholders and end users to define technical requirements, business goals, and alternative solutions for data collection and reporting
- Create, manage, and document reports, processes, policies, and project plans, including risk assessments and goal tracking
- Conduct code reviews, enforce coding standards, and provide technical leadership and mentorship to development team members
- Proactively identify and mitigate technical risks, recommending improvements in technologies, tools, and processes
- Drive the adoption of modern development tools, frameworks, and best practices
- Contribute to strategic planning related to automation initiatives and product development
- Ensure clear, thorough communication and documentation across teams to support knowledge sharing and training

Minimum Qualifications
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- 5+ years of professional software development experience
- Strong proficiency in HTML, XML, XPath, XSLT, and Regular Expressions for data extraction and transformation
- Hands-on experience with Visual Studio
- Strong proficiency in Python
- Some experience with C# .NET
- Solid experience with MS SQL Server, with strong skills in SQL querying and data analysis
- Experience with web scraping, particularly using the Scrapy framework integrated with Kafka, Zookeeper, and Redis
- Experience with .NET automation tools such as Selenium
- Understanding of CAPTCHA-solving services and working with proxy services
- Experience working in a Linux environment is a plus
- Highly self-motivated and detail-oriented, with a proactive, goal-driven mindset
- Strong team player with dependable work habits and well-developed interpersonal skills
- Excellent verbal and written communication skills
- Demonstrates willingness and flexibility to adapt schedule when necessary to meet client needs
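The following is a minimal sketch of the kind of Scrapy spider the duties describe; the target URL, XPath selectors, and item fields are hypothetical placeholders, not details of any Cisive system.

```python
# Illustrative sketch: a small Scrapy spider extracting tabular records
# and following pagination. URL, selectors, and fields are hypothetical.
import scrapy


class RecordSpider(scrapy.Spider):
    name = "records"                               # hypothetical name
    start_urls = ["https://example.com/listings"]  # hypothetical source

    def parse(self, response):
        # One item per table row; XPath mirrors the posting's emphasis
        # on XPath/Regular Expressions for data extraction.
        for row in response.xpath("//table[@id='results']//tr"):
            yield {
                "name": row.xpath("td[1]/text()").get(),
                "status": row.xpath("td[2]/text()").get(),
            }
        # Follow the next page, if one exists.
        next_page = response.xpath("//a[@rel='next']/@href").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```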
Posted 1 week ago
1.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description

About Oracle Analytics & Big Data Service:
Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights.

Oracle Big Data Service, part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems for monitoring and gathering telemetry on the service's runtime characteristics, and being able to act on that telemetry data, is part of the charter.

We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space: we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Minimum Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
- US passport holder (required by the position to access US Government regions).
- Expertise in coding in Java and Python, with an emphasis on tuning/optimization.
- Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
- Experience with open-source software in the Big Data ecosystem.
- Experience at an organization with an operational/DevOps culture.
- Solid understanding of networking, storage, and security components related to cloud infrastructure.
- Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.

Preferred Qualifications:
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
- Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
- In-depth understanding of Java and JVM mechanics.
- Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities

Key Responsibilities:
- Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service (a brief PySpark sketch follows this posting).
- Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings.
- Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
- Become an active member of the Apache open source community when working on open source components.
- Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.

Qualifications
Career Level - IC2

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
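As a hedged illustration of the large-scale data processing such a service hosts, here is a small PySpark batch job aggregating events in a Hadoop-based data lake; the HDFS paths and column names are hypothetical.

```python
# Illustrative sketch: a PySpark batch aggregation over a data lake.
# Paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-aggregation").getOrCreate()

# Read raw events from the lake (hypothetical path and schema).
events = spark.read.parquet("hdfs:///data/lake/events")

# Count events per day as a simple derived dataset.
daily = (
    events
    .groupBy(F.to_date("event_ts").alias("day"))
    .agg(F.count("*").alias("events"))
)

daily.write.mode("overwrite").parquet("hdfs:///data/lake/daily_counts")
spark.stop()
```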
Posted 1 week ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Responsibilities

Data Handling and Processing:
• Proficient in SQL Server and query optimization.
• Expertise in application data design and process management.
• Extensive knowledge of data modelling.
• Hands-on experience with Azure Data Factory, Azure Synapse Analytics, and Microsoft Fabric.
• Experience working with Azure Databricks.
• Expertise in data warehouse development, including experience with SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services).
• Proficiency in ETL processes (data extraction, transformation, and loading), including data cleaning and normalization (a minimal sketch follows this posting).
• Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) for large-scale data processing.
• Understanding of data governance, compliance, and security measures within Azure environments.

Data Analysis and Visualization:
• Experience in data analysis, statistical modelling, and machine learning techniques.
• Proficiency in analytical tools like Python and R, and libraries such as Pandas and NumPy, for data analysis and modelling.
• Strong expertise in Power BI for data visualization, data modelling, and DAX queries, with knowledge of best practices.
• Experience in implementing row-level security in Power BI.
• Ability to work with medium-complexity data models and quickly understand application data design and processes.
• Familiar with industry best practices for Power BI and experienced in performance optimization of existing implementations.
• Understanding of machine learning algorithms, including supervised, unsupervised, and deep learning techniques.

Non-Technical Skills:
• Ability to lead a team of 4-5 developers and take ownership of deliverables.
• Demonstrates a commitment to continuous learning, particularly with new technologies.
• Strong communication skills in English, both written and verbal.
• Able to effectively interact with customers during project implementation.
• Capable of explaining complex technical concepts to non-technical stakeholders.

Data Management: SQL, Azure Synapse Analytics, Azure Analysis Services and Data Marts, Microsoft Fabric
ETL Tools: Azure Data Factory, Azure Databricks, Python, SSIS
Data Visualization: Power BI, DAX
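To make the ETL cleaning-and-normalization step concrete, here is a minimal Pandas/NumPy sketch; the file and column names are hypothetical placeholders.

```python
# Illustrative sketch: the cleaning/normalization stage of an ETL flow
# using Pandas and NumPy. File and column names are hypothetical.
import numpy as np
import pandas as pd

raw = pd.read_csv("sales_extract.csv")        # hypothetical extract

clean = (
    raw
    .drop_duplicates()
    .assign(
        # Standardize text keys before loading to the warehouse.
        region=lambda df: df["region"].str.strip().str.upper(),
        # Replace sentinel values with proper nulls.
        amount=lambda df: df["amount"].replace({-1: np.nan}),
    )
)
# Impute remaining nulls with the median as a simple normalization step.
clean["amount"] = clean["amount"].fillna(clean["amount"].median())

clean.to_parquet("sales_clean.parquet", index=False)
```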
Posted 1 week ago
6.0 years
0 Lacs
Delhi, India
On-site
About the Role
We're looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, machine learning, MLOps, and application development, we want to hear from you. You'll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.

Key Responsibilities
- Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval Augmented Generation), prompt engineering, model evaluation, and LLM integration.
- Architect and build production-grade Python applications using frameworks such as FastAPI or Flask (a minimal sketch follows this posting).
- Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment.
- Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection.
- Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring.
- Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines.
- Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems.

Must-Have Skills
- Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings.
- Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures.
- Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure).
- Proficiency in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability.
- Proven experience in batch data pipelines and training/inference orchestration.
- Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
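Below is a minimal sketch of the RAG-style FastAPI service skeleton the responsibilities describe. The retriever and generator are stand-in stubs: a real system would embed the question, query a vector store, and call an LLM provider SDK with the augmented prompt.

```python
# Illustrative sketch: a RAG-shaped FastAPI endpoint with stubbed
# retrieval and generation. All names here are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Query(BaseModel):
    question: str


def retrieve(question: str) -> list[str]:
    # Stub: a real implementation queries a vector store by embedding.
    return ["context passage 1", "context passage 2"]


def generate(prompt: str) -> str:
    # Stub: a real implementation calls an LLM with the augmented prompt.
    return f"answer grounded in: {prompt[:60]}..."


@app.post("/ask")
def ask(query: Query) -> dict:
    # Retrieval Augmented Generation: ground the prompt in retrieved context.
    context = "\n".join(retrieve(query.question))
    prompt = f"Context:\n{context}\n\nQuestion: {query.question}"
    return {"answer": generate(prompt)}
```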
Posted 1 week ago
15.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
#HiringAlert
Job Role: Principal Java Engineer
Location: Hybrid – Kolkata (3-4 days per week in office)
Industry: Gaming / Real-time Systems / Technology
Employment Type: Full-time
Salary: Based on experience and aligned with industry standards

About the Role
We're a fast-growing start-up in the gaming industry, building high-performance, real-time platforms that power immersive digital experiences. We're looking for a Principal Java Engineer to lead the design and development of scalable backend systems that support live, data-intensive applications. This is a hybrid role based in Kolkata, ideal for someone who thrives on solving technical challenges, enjoys taking ownership, and wants to build great software in a dynamic, informal, and high-energy environment.

Key Responsibilities
- Design and develop scalable, resilient backend systems using Java (17+) and Spring Boot
- Architect APIs, microservices, and real-time backend components for gaming platforms
- Own backend infrastructure, deployment pipelines, monitoring, and system performance
- Collaborate with product and delivery teams to translate ideas into production-ready features
- Take full ownership of backend architecture, from planning to delivery and iteration
- Continuously improve code quality, engineering practices, and overall system design

Required Skills & Experience
- 10-15 years of experience in backend engineering, with strong expertise in Java (preferably 17+) and Spring Boot
- Proven experience building high-performance, distributed systems at scale
- Hands-on with cloud platforms (AWS, GCP, or Azure), Docker, and Kubernetes
- Strong understanding of SQL and NoSQL databases, caching (e.g., Redis), and messaging systems (Kafka, RabbitMQ)
- Solid skills in debugging, performance tuning, and system optimization
- Ability to work independently, make pragmatic decisions, and collaborate in a hybrid team setup

Good to Have
- Experience in gaming, real-time platforms, or multiplayer systems
- Familiarity with WebSockets, telemetry pipelines, or event-driven architecture
- Exposure to CI/CD pipelines, infrastructure as code, and observability tools

Why Join Us?
- Work in a creative, fast-paced domain that blends engineering depth with product excitement
- Flat structure and high trust: focus on outcomes, not formalities
- Visible impact: everything you build will be used by real players in real time
- Informal, collaborative culture where we take our work seriously, but not ourselves
- Flexible hybrid setup: 3 to 4 days a week in-office, with room for focused work and team alignment

How to Apply
Send your resume or portfolio to talent@projectpietech.com. We'd love to hear from engineers who are passionate about solving hard problems and building something exciting from the ground up.

#HiringNow #JavaJobs #BackendEngineer #PrincipalEngineer #JavaDeveloper #SpringBoot #SoftwareEngineering #GamingIndustryJobs #RealTimeSystems #TechJobsIndia #KolkataJobs #EngineeringLeadership #MicroservicesArchitecture #CloudEngineering #StartupJobs #ProjectPieTechnologies #JobAlert #NowHiring #WorkWithUs #CareerOpportunity #BackendDeveloper #Microservices #SoftwareArchitecture #CloudComputing #DistributedSystems #Kubernetes #Docker #Kafka #AWSJobs #DevOpsEngineering #NoSQL #WebSockets #RealTimeData #LifeAtProjectPie #JoinOurTeam #TechLeadership #InnovationDriven #BuildTheFuture #MakeAnImpact #EngineerTheFuture #TeamCulture #FlatHierarchy
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
OPENTEXT
OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

Your Impact
We are seeking a skilled and experienced Software Engineer with expertise in Large Language Models (LLMs), Java, Python, Kubernetes, Helm, and cloud technologies like AWS. The ideal candidate will contribute to designing, developing, and maintaining scalable software solutions using microservices architecture. This role offers an exciting opportunity to work with cutting-edge technologies in a collaborative environment.

What the Role Offers
- Design, develop, troubleshoot, and debug software programs for software enhancements and new products.
- Integrate Large Language Models (LLMs) into business applications to enhance functionality and user experience.
- Develop and maintain transformer-based models.
- Develop RESTful APIs and ensure seamless integration across services.
- Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
- Implement best practices for cloud-native development using AWS services like EC2, Lambda, SageMaker, S3, etc.
- Deploy, manage, and scale containerized applications using Kubernetes (K8s) and Helm.
- Design enhancements, updates, and programming changes for portions and subsystems of application software, utilities, databases, and Internet-related tools.
- Analyze designs and determine the coding, programming, and integration activities required, based on general objectives and knowledge of the overall architecture of the product or solution.
- Collaborate and communicate with management, internal teams, and outsourced development partners regarding software systems design status, project progress, and issue resolution.
- Represent the software systems engineering team for all phases of larger and more complex development projects.
- Ensure system reliability, security, and performance through effective monitoring and troubleshooting.
- Write clean, efficient, and maintainable code following industry standards.
- Participate in code reviews, mentorship, and knowledge-sharing within the team.

What You Need to Succeed
- Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent.
- Typically 3-5 years of experience.
- Strong understanding of Large Language Models (LLMs) and experience applying them in real-world applications.
- Expertise in Elasticsearch or similar search and indexing technologies.
- Expertise in designing and implementing microservices architecture.
- Solid experience with AWS services like EC2, VPC, ECR, EKS, SageMaker, etc. for cloud deployment and management.
- Proficiency in container orchestration tools such as Kubernetes (K8s) and packaging/deployment tools like Helm.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Strong experience in Java and Python development, with proficiency in frameworks like Spring Boot or Java EE.
- Good hands-on experience in designing and writing modular, object-oriented code.
- Good knowledge of REST APIs, Spring, Spring Boot, and Hibernate.
- Excellent analytical, troubleshooting, and problem-solving skills.
- Ability to demonstrate effective teamwork both within the immediate team and across teams.
- Experience working with version control and build tools like Git, GitLab, Maven, Jenkins, and GitLab CI.
- Excellent communication and collaboration skills.
- Familiarity with Python for LLM-related tasks (illustrated in the sketch after this posting).
- Working knowledge of RAG.
- Experience working with NLP frameworks such as Hugging Face, OpenAI, or similar.
- Knowledge of database systems like PostgreSQL, MongoDB, or DynamoDB.
- Experience with observability tools like Prometheus, Grafana, or the ELK Stack.
- Experience working with event-driven architectures and messaging systems (e.g., Kafka, RabbitMQ).
- Experience with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform, CloudFormation).
- Familiarity with Agile/Scrum development methodologies.

One Last Thing
OpenText is more than just a corporation; it's a global community where trust is foundational, the bar is raised, and outcomes are owned. Join us on our mission to drive positive change through privacy, technology, and collaboration. At OpenText, we don't just have a culture; we have character. Choose us because you want to be part of a company that embraces innovation and empowers its employees to make a difference.

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
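As one hedged illustration of the Python/NLP items above, the sketch below runs a transformer model through the Hugging Face pipeline API; the task and model choice are assumptions for demonstration, not the team's actual stack.

```python
# Illustrative sketch: invoking a transformer model via the Hugging Face
# pipeline API for an LLM-related task. Model choice is a hypothetical
# example, not a detail from the posting.
from transformers import pipeline

# Summarization as a stand-in for an LLM-backed document feature.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = (
    "OpenText builds information management software that helps "
    "organizations capture, govern, and exchange information at scale."
)
print(summarizer(document, max_length=40, min_length=10)[0]["summary_text"])
```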
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities
- Successfully and independently deliver large-size projects, including scoping, planning, design, development, testing, rollout, and maintenance.
- Write clean, concise, modular, and well-tested code.
- Review code from junior engineers and provide constant and constructive feedback.
- Contribute to building and maintaining documentation related to the team's projects.
- Create high-quality, loosely coupled, reliable, and extensible technical designs. Actively understand trade-offs between different designs and apply the solution suited to the situation and requirements.
- Participate in the team's on-call rotation and lead the troubleshooting and resolution process for any issues related to the services, work sub-streams, or products owned by your team.
- Constantly improve the health and quality of the services and code you work on, through set practices and new initiatives.
- Lead cross-team collaborations for the projects you work on.
- Support hiring and onboarding activities, along with coaching and developing junior members of your team, and contribute to knowledge sharing.

Must-Have Qualifications and Experience
- 4-6 years of hands-on experience in designing, developing, testing, and deploying small to mid-scale applications in any language or stack.
- 2+ years of recent and active software development experience.
- Good understanding of Golang; able to use Go concurrency patterns and contribute to building reusable Go components.
- Strong experience in designing loosely coupled, reliable, and extensible distributed services.
- Great understanding of clean architecture, S.O.L.I.D. principles, and event-driven architecture.
- Experience with message broker services like SQS, Kafka, etc.
- Strong data modeling experience in relational databases.
- Strong cross-team collaboration and communication skills.
- Self-driven, with a passion for learning new things quickly, solving challenging problems, and the drive to get better with support from your manager.

Nice to Have
- A bachelor's degree in computer science, information technology, or equivalent education.
- Experience with NoSQL databases.
Posted 1 week ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Senior Software Engineer – Backend (Python)
📍 Location: Hyderabad (Hybrid)
🕒 Experience: 5-12 years

About the Role:
We are looking for a Senior Software Engineer – Backend with strong expertise in Python and modern big data technologies. This role involves building scalable backend solutions for a leading healthcare product-based company.

Key Skills:
- Programming: Python, Spark (Scala), PySpark (PySpark API)
- Big Data: Hadoop, Databricks
- Data Engineering: SQL, Kafka (see the sketch below)
- Strong problem-solving skills and experience in backend architecture

Why Join?
- Hybrid work model in Hyderabad
- Opportunity to work on innovative healthcare products
- Collaborative environment with a modern tech stack

Keywords for Search: Python, PySpark, Spark, Spark-Scala, Hadoop, Databricks, Kafka, SQL, Backend Development, Big Data Engineering, Healthcare Technology
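A minimal, hedged sketch of the Python/Spark/Kafka combination the posting names: a PySpark Structured Streaming job reading a Kafka topic and landing records as Parquet. Brokers, topic, and paths are hypothetical placeholders.

```python
# Illustrative sketch: Spark Structured Streaming from Kafka to Parquet.
# Broker, topic, and filesystem paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical
    .option("subscribe", "claims")                      # hypothetical topic
    .load()
)

# Kafka delivers raw bytes; cast the value column before downstream parsing.
decoded = events.select(F.col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream.format("parquet")
    .option("path", "/data/claims")                     # hypothetical sink
    .option("checkpointLocation", "/chk/claims")
    .start()
)
query.awaitTermination()
```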
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Wissen Technology is Hiring for Java + Python Developer

About Wissen Technology:
At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset, ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don't just meet expectations; we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models: our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact, the first time, every time.

Job Summary:
We're looking for a versatile Java + Python Developer who thrives in backend development and automation. You'll be working on scalable systems, integrating third-party services, and contributing to high-impact projects across fintech, data platforms, and cloud-native applications.

Experience: 2-10 years
Location: Bengaluru
Mode of work: Full time

Key Responsibilities:
- Design, develop, and maintain backend services using Java and Python
- Build and integrate RESTful APIs, microservices, and data pipelines
- Write clean, efficient, and testable code across both Java and Python stacks
- Work on real-time, multithreaded systems and optimize performance
- Collaborate with DevOps and data engineering teams on CI/CD, deployment, and monitoring
- Participate in design discussions, peer reviews, and Agile ceremonies

Required Skills:
- 2-10 years of experience in software development
- Strong expertise in Core Java (8+) and Spring Boot
- Proficient in Python (data processing, scripting, API development)
- Solid understanding of data structures, algorithms, and multithreading
- Hands-on experience with REST APIs, JSON, and SQL/NoSQL databases (PostgreSQL, MongoDB, etc.)
- Familiarity with Git, Maven/Gradle, Jenkins, and Agile/Scrum

Preferred Skills:
- Experience with Kafka, RabbitMQ, or other message queues
- Cloud services (AWS, Azure, or GCP)
- Knowledge of data engineering tools (Pandas, NumPy, PySpark, etc.)
- Docker/Kubernetes familiarity
- Exposure to ML/AI APIs or DevOps scripting

Wissen Sites:
Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Darwinbox:
Darwinbox is Asia's fastest-growing HR technology platform, designing the future of work by building the world's best HR tech, driven by a fierce focus on employee experience and customer success, and continuous, iterative innovation. We are the preferred choice of 1000+ global enterprises to manage their 4 million+ employees across 130+ countries. Darwinbox's new-age HCM suite competes with local as well as global players in the enterprise technology space (such as SAP, Oracle, and Workday). The firm has acquired notable customers ranging from large conglomerates to unicorn start-ups: Nivea, Starbucks, DLF, JSW, Adani Group, Crisil, CRED, Vedanta, Mahindra, Glenmark, Gokongwei Group, Mitra Adiperkasa, EFS Facilities Management, VNG Corporation, and many more. Our vision of building a world-class product company from Asia is backed by marquee global investors like Microsoft, Salesforce, Sequoia Capital, TCV, KKR, and Partners Group.

Why Join Us?
The rate at which our product and market presence are growing is unprecedented. We're a rocketship, and we're not planning on slowing down anytime soon. That's why we need you! You'll experience a culture of:
- Disproportionate rewards for top performance
- Accelerated growth in a hyper-growth environment
- A wellbeing-first culture focused on employee care
- Continuous learning and professional development
- Meaningful relationships and a collaborative environment

Role Overview:
We are looking for a highly skilled Engineering Architect to drive our platform's architectural vision, scalability, and reliability. You will work closely with engineering teams to design and implement robust, high-performance, secure solutions that align with our business objectives.

Responsibilities:
- Define and implement the architectural roadmap, ensuring scalability, reliability, and security of the platform.
- Provide technical leadership and mentorship to development teams across backend and frontend technologies.
- Design and optimize microservices architecture, improving system performance and resilience.
- Evaluate and integrate emerging technologies to enhance platform capabilities.
- Ensure best practices in coding, security, and DevOps across the engineering teams.
- Collaborate with product managers and stakeholders to align technical decisions with business needs.
- Optimize cloud infrastructure on AWS and Azure for cost efficiency and performance.
- Lead technical reviews, troubleshoot complex issues, and provide solutions for performance bottlenecks.

Requirements:
- 10+ years of experience in software engineering, with at least 4+ years in an architectural role.
- Strong expertise in backend technologies, including PHP, Node.js, and microservices architecture.
- Proficiency in front-end frameworks like Angular and TypeScript.
- Experience with MongoDB, database design, and query optimization.
- Deep understanding of cloud platforms (AWS & Azure) and DevOps best practices.
- Expertise in designing scalable, distributed systems with high availability.
- Strong knowledge of API design, authentication, and security best practices.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Excellent problem-solving skills and ability to drive technical decisions.

Preferred Qualifications:
- Experience with CI/CD pipelines and infrastructure as code.
- Knowledge of event-driven architectures and message queues (SQS, RabbitMQ, Kafka, etc.).
- Prior experience in a SaaS or enterprise product-based company.
- Strong leadership and mentoring skills to guide engineering teams.
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business.

Job Description
This role requires working from our local Hyderabad office 2-3 times a week.

ABOUT THE ROLE:
We are seeking a talented individual to join our team as a Java Backend Developer. The Java Backend Developer is self-driven and has a holistic, big-picture mindset in developing enterprise solutions. In this role, you will be responsible for designing modern domain-driven, event-driven microservices architectures hosted on public cloud platforms (AWS) and integrated with modern technologies such as Kafka for event management/streaming and Docker & Kubernetes for containerization. You will also be responsible for developing and supporting applications in Billing, Collections, and Payment Gateway within the commerce and club management platform. This includes assisting with the support of existing services as well as designing and implementing new business solutions and application deployments, utilizing a thorough understanding of applicable technology, tools, and existing designs. The work involves collaborating with product teams, technical leads, business analysts, DBAs, infrastructure, and other cross-department teams to evaluate business needs and provide end-to-end technical solutions.

WHAT YOU'LL DO:
- Act as a Java Backend Developer in a development team; collaborate with other team members and contribute in all phases of the Software Development Life Cycle (SDLC)
- Apply Domain-Driven Design, Object-Oriented Design, and proven design patterns
- Do hands-on coding and development following secure coding guidelines and Test-Driven Development
- Work with QA teams to conduct integrated (application and database) stress testing, performance analysis, and tuning
- Support systems testing and migration of platforms and applications to production
- Make enhancements to existing web applications built using Java and Spring frameworks
- Ensure quality, security, and compliance requirements are met
- Act as an escalation point for application support and troubleshooting
- Have passion for hands-on coding, putting the customer first, and delivering an exceptional and reliable product to ABC Fitness's customers
- Take up tooling, integrate with other applications, pilot new-technology proofs of concept, and leverage the outcomes in ongoing solution initiatives
- Stay curious about where technology and the industry are going and constantly strive to keep up through personal projects
- Bring strong analytical skills with high attention to detail and accuracy, and expertise in debugging issues and root cause analysis
- Bring strong organizational, multi-tasking, and prioritizing skills

WHAT YOU'LL NEED:
- Computer Science degree or equivalent work experience
- Work experience as a senior developer in a team environment
- 3+ years of application development and implementation experience
- 3+ years of Java experience
- 3+ years of Spring experience
- Work experience in an Agile development scrum team
- Work experience creating or maintaining RESTful or SOAP web services
- Work experience creating and maintaining cloud-enabled/cloud-native distributed applications
- Knowledge of API gateways and integration frameworks, containers, and container orchestration
- Knowledge of and experience with system application troubleshooting and quality assurance application testing
- A focus on delivering outcomes to customers, encompassing designing, coding, ensuring quality, and delivering changes to our customers

AND IT'S GREAT TO HAVE:
- 2+ years of SQL experience
- Billing or payment processing industry experience
- Knowledge and understanding of DevOps principles
- Knowledge and understanding of cloud computing, PaaS design principles, microservices, and containers
- Knowledge and understanding of application or software security, such as web application penetration testing, secure code review, and secure static code analysis
- Ability to simultaneously lead multiple projects
- Good verbal, written, and interpersonal communication skills

WHAT'S IN IT FOR YOU:
- Purpose-led company with a values-focused culture: Best Life, One Team, Growth Mindset
- Time off: competitive PTO plans with 15 days of earned accrued leave, 12 days of sick leave, and 12 days of casual leave per year
- 11 holidays plus 4 Days of Disconnect: once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam
- Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling
- Life insurance and personal accident insurance
- Best Life Perk: we are committed to meeting you wherever you are in your fitness journey, with a quarterly reimbursement
- Premium Calm app: enjoy tranquility with a Calm app subscription for you and up to 4 dependents over the age of 16
- Support for working women, with financial aid toward crèche facilities, ensuring a safe and nurturing environment for their little ones while they focus on their careers

We're committed to diversity and passion, and encourage you to apply, even if you don't demonstrate all the listed skillsets!

ABC'S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION:
ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients, and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person's diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com.

ABOUT ABC:
ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes, whether a multi-location chain, franchise, or independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably, offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company; Thoma Bravo is a private equity firm focused on investing in software and technology companies (thomabravo.com).

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
Posted 1 week ago
7.0 years
0 Lacs
India
Remote
Job Title: Node.js Developer
Location: Remote
Experience: 7 Years
Employment Type: Full-time

Job Summary:
We are looking for a highly skilled Senior Node.js Developer with 6+ years of experience in designing, developing, and deploying scalable backend applications. The ideal candidate should have deep expertise in Node.js, Express.js, databases (SQL & NoSQL), and cloud services. You will play a crucial role in architecting solutions, optimizing performance, and ensuring high-quality code.

Key Responsibilities:
- Develop and maintain backend services using Node.js, Express.js, and Nest.js.
- Design RESTful APIs and integrate third-party services.
- Implement microservices architecture for scalability and efficiency.
- Work with databases such as MongoDB, PostgreSQL, MySQL, or Redis.
- Write efficient, reusable, and testable code following best practices.
- Optimize applications for performance and scalability.
- Collaborate with frontend developers, DevOps, and other team members.
- Implement authentication and authorization mechanisms using JWT, OAuth, or similar technologies (a minimal illustration follows this listing).
- Ensure security best practices in API and backend development.
- Work with CI/CD pipelines and deployment strategies.
- Troubleshoot and debug issues in production and staging environments.
- Write unit and integration tests using Jest, Mocha, or Chai.

Required Skills & Qualifications:
- 6+ years of experience in Node.js backend development.
- Strong expertise in Express.js, Nest.js, or Koa.js.
- Proficiency in JavaScript, TypeScript, and modern ES6+ features.
- Experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis) databases.
- Knowledge of message queues like RabbitMQ, Kafka, or Redis Pub/Sub.
- Familiarity with Docker, Kubernetes, and cloud platforms (AWS, GCP, or Azure).
- Hands-on experience with GraphQL (Apollo, Hasura) is a plus.
- Experience in writing unit and integration tests.
- Strong problem-solving and debugging skills.
- Excellent understanding of asynchronous programming and event-driven architectures.
- Familiarity with DevOps practices and CI/CD pipelines.

Preferred Skills:
- Experience with serverless frameworks (AWS Lambda, Firebase Functions).
- Knowledge of WebSockets and real-time communication.
- Exposure to Terraform, Ansible, or other Infrastructure as Code (IaC) tools.
- Experience with performance monitoring tools like Prometheus, Grafana, or Datadog.
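As referenced in the responsibilities above, token-based authentication with JWT follows an issue-and-verify flow. The posting's stack is Node.js; purely to illustrate the language-agnostic pattern, here is a minimal sketch in Python using the PyJWT library. The secret, claims, and expiry window are hypothetical, not this employer's implementation.

```python
import datetime

import jwt  # pip install PyJWT

SECRET_KEY = "change-me"  # hypothetical; load from a secrets manager in practice


def issue_token(user_id: str) -> str:
    """Issue a short-lived signed token for the given user."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": user_id,                              # subject of the token
        "iat": now,                                  # issued-at timestamp
        "exp": now + datetime.timedelta(minutes=15), # expiry limits replay risk
    }
    return jwt.encode(claims, SECRET_KEY, algorithm="HS256")


def verify_token(token: str) -> dict:
    """Decode and validate; raises jwt.ExpiredSignatureError or jwt.InvalidTokenError."""
    return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])


if __name__ == "__main__":
    token = issue_token("user-123")
    print(verify_token(token)["sub"])  # -> user-123
```

In a real service the verify step typically sits in middleware that rejects requests whose token fails to decode, before any handler runs.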
Posted 1 week ago
11.0 years
0 Lacs
India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
- Total experience: 11+ years.
- Strong working experience with architecture and development in Java 8 or higher.
- Experience with front-end frameworks such as React, Redux, Angular, or Vue.
- Familiarity with Node.js and modern backend stacks.
- Deep knowledge of AWS, Azure, or GCP platforms and services.
- Hands-on experience with CI/CD pipelines, containerization (Docker, Kubernetes), and microservices.
- Deep understanding of design patterns, data structures, and microservices architecture.
- Strong knowledge of object-oriented programming, data structures, and algorithms.
- Experience with scalable system design, performance tuning, and application security.
- Experience integrating with SAP ERP systems, Net Revenue Management platforms, and o9.
- Familiarity with data integration patterns, middleware, and message brokers (e.g., Kafka, RabbitMQ).
- A good understanding of UML and design patterns.
- Excellent communication and stakeholder management skills.

RESPONSIBILITIES:
- Writing and reviewing great-quality code
- Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project
- Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns and frameworks to realize it
- Determining and implementing design methodologies and toolsets
- Enabling application development by coordinating requirements, schedules, and activities
- Leading/supporting UAT and production rollouts
- Creating, understanding and validating the WBS and estimated effort for a given module/task, and being able to justify it
- Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement
- Giving constructive feedback to team members and setting clear expectations
- Helping the team troubleshoot and resolve complex bugs
- Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken
- Carrying out POCs to make sure that suggested designs/technologies meet the requirements

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will:
- Design and build high-quality APIs that are scalable and global at the core
- Build custom policies, frameworks/components, error handling and transaction tracing
- Set up Exchange catalogue orgs and assets in Anypoint Platform
- Set up security models and policies for consumers and producers of API and catalogue assets
- Work across various platforms and with the associated stakeholders/business users
- Design, develop, test and implement technical solutions based on business requirements and strategic direction
- Collaborate with other development teams, Enterprise Architecture and support teams to design, develop, test and maintain the various platforms and their integration with other systems
- Communicate with technical and non-technical groups on a regular basis as part of product/project support
- Support production releases on a need basis
- Carry out peer review, CI/CD pipeline implementation and service monitoring
- Act as ITSO delegate for the application(s)
- Be flexible with working hours and ready to work in shifts, including one week of 24x7 on-call production support (weekends included) roughly once a month

Requirements
To be successful in this role, you should meet the following requirements:
- More than 8 years of experience in software development and design using Java/J2EE technologies, with hands-on experience across the complete Spring stack and API implementation on cloud (GCP/AWS)
- Hands-on experience with Kubernetes (K8s) and Docker
- Experience in MQ, Sonar and API Gateway
- Experience in developing large-scale integration and API solutions
- Experience working with API Management, ARM, Exchange and Access Management modules
- Experience in understanding and analysing complex business requirements and carrying out the system design accordingly
- Extensive knowledge of building REST-based APIs, with good knowledge of API documentation (RAML/Swagger/OAS)
- Extensive knowledge of microservices architecture, with hands-on experience implementing it using Spring Boot
- Good knowledge of the security, scaling and performance-tuning aspects of microservices
- Good understanding of SQL/NoSQL databases
- Good understanding of messaging platforms like Kafka, Pub/Sub, etc.
- Optional: understanding of cloud platforms
- Fair understanding of DevOps concepts
- Experience in creating custom policies and custom connectors
- Excellent verbal and written communication skills, both technical and non-technical
- Willingness to work on POCs
- Experience handling support projects
- Spring Boot and ORM tool knowledge (e.g. Hibernate), Web Services

You'll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued and respected and opinions count.
We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 1 week ago
15.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Entity: Technology
Job Family Group: IT&S Group

Job Description:
As a Staff Software Engineer, you are the lead to the teams creating Enterprise Integration solutions for BP colleagues and external users. Your team's mission is to be the digital provider of choice to your area of BP – delivering innovation at speed where it's wanted, and day-in-day-out reliability where it's needed. You will operate in a dynamic and commercially focussed environment, with the resources of one of the world's largest digital organisations and leading digital and IT vendors working with you. You will be part of growing and strengthening our technical talent base – experts coming together to solve BP's and the world's problems.

Key Accountabilities
- Delivery of stable and efficient integration solutions, including implementing new solutions and technical-debt management/remediation of existing platforms. We believe in DevOps – you build it, you run it!
- Ensure Integration Services in the scope of the role evolve in response to changing business needs and technology developments, and maintain alignment to bp standard operating environments and emerging technologies
- Work with functional stakeholders, project managers and business analysts to understand requirements
- Lead a team of integration engineers, promoting a culture of agility and continuous improvement, and embrace opportunities provided through increased automation
- Maximise value from current applications and emerging technologies, showing technical thought leadership in your business area across a wide range of technologies
- Collaborate with peers across I&E teams and mentor more junior engineers

Work location: Pune
Years of experience: 15+ years, with a minimum of 10 years of relevant experience.

Required Criteria
- Expert in Java and integration frameworks; able to design highly scalable integrations involving APIs, messaging, files, databases and cloud services (a minimal event-publishing sketch follows this listing)
- Experienced in leading multiple technology squads of engineers
- Experienced in integration tools like TIBCO/MuleSoft, Apache Camel/Spring Integration, Confluent Kafka, etc.
- Expert in Enterprise Integration Patterns (EIPs) and iBlocks to build secure integrations
- Willingness and ability to learn, to become skilled in at least one more cloud-native integration solution (AWS or Azure) on top of your existing skillset
- Deep understanding of the interface development lifecycle, including design, security, design patterns for extensible and reliable code, automated unit and functional testing, CI/CD and telemetry
- Strong experience in open-source technologies and the ability to adopt AI-assisted development
- Experienced in enterprise integrations, EDA and microservices architecture
- Strong inclusive leadership and people management
- Stakeholder management
- Embracing a culture of continuous improvement

Preferred Criteria
- Agile methodologies
- ServiceNow
- Risk management
- AI-assisted DevOps
- Monitoring and telemetry tools like Grafana, OpenTelemetry
- User experience analysis
- Cybersecurity and compliance

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner!
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Additional Information
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. Even though the job is advertised as full time, please contact the hiring manager or the recruiter as flexible working arrangements may be considered.

Travel Requirement: Up to 10% travel should be expected with this role
Relocation Assistance: This role is eligible for relocation within country
Remote Type: This position is a hybrid of office/remote working

Skills: Agile Methodology, Agility core practices, Analytics, API and platform design, API Development, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms, Digital Project Management, Documentation and knowledge sharing, Enterprise Integration Patterns, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSQL data modelling, Relational Data Modelling, Risk Management, Scripting {+ 7 more}

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
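The integration work described above centres on messaging and event-driven architecture with tools like Confluent Kafka. As a minimal illustration of the publish side of an event-driven integration, here is a Python sketch using the confluent-kafka client; the broker address, topic name, and event payload are all hypothetical, and bp's actual integrations use the enterprise tooling named above.

```python
import json

from confluent_kafka import Producer  # pip install confluent-kafka

producer = Producer({"bootstrap.servers": "broker:9092"})  # hypothetical broker


def delivery_report(err, msg):
    """Called once per message to confirm delivery or surface an error."""
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [partition {msg.partition()}]")


# Hypothetical integration event; real payloads follow an agreed schema/contract.
event = {"order_id": "ORD-1001", "status": "SHIPPED"}

producer.produce(
    "order-events",                               # hypothetical topic
    key=event["order_id"],                        # keying keeps per-order ordering
    value=json.dumps(event).encode("utf-8"),
    callback=delivery_report,
)
producer.flush()  # block until all queued messages are delivered
```

Keying messages by a business identifier, as above, is a common way to guarantee ordering per entity while still spreading load across partitions.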
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Description
Senior Data Engineer

Our Enterprise Data & Analytics (EDA) team is looking for an experienced Senior Data Engineer to join our growing data engineering team. You'll work in a collaborative Agile environment using the latest engineering best practices, with involvement in all aspects of the software development lifecycle. You will craft and develop curated data products, applying standard architectural and data modeling practices to maintain the foundation data layer serving as a single source of truth across Zendesk. You will primarily develop Data Warehouse solutions in BigQuery/Snowflake using technologies such as dbt, Airflow, and Terraform.

What You Get To Do Every Single Day
- Collaborate with team members and business partners to collect business requirements, define successful analytics outcomes and design data models
- Serve as the Data Model subject matter expert and data model spokesperson, demonstrated by the ability to address questions quickly and accurately
- Implement the Enterprise Data Warehouse by transforming raw data into schemas and data models for various business domains using SQL and dbt
- Design, build, and maintain ELT pipelines in the Enterprise Data Warehouse to ensure reliable business reporting, using Airflow, Fivetran and dbt (a minimal orchestration sketch follows this listing)
- Optimize data warehousing processes by refining naming conventions, enhancing data modeling, and implementing best practices for data quality testing
- Build analytics solutions that provide practical insights into customer 360, finance, product, sales and other key business domains
- Build and promote best engineering practices in the areas of version control, CI/CD, code review and pair programming
- Identify, design, and implement internal process improvements: automating manual processes and optimizing data delivery
- Work with data and analytics experts to strive for greater functionality in our data systems

Basic Qualifications
What you bring to the role:
- 5+ years of data engineering experience building, working with and maintaining data pipelines and ETL processes in big data environments
- 5+ years of experience in data modeling and data architecture in a production environment
- 5+ years of writing complex SQL queries
- 5+ years of experience with cloud columnar databases (we use Snowflake)
- 2+ years of production experience working with dbt and designing and implementing Data Warehouse solutions
- Ability to work closely with data scientists, analysts, and other stakeholders to translate business requirements into technical solutions
- Strong documentation skills for pipeline design and data flow diagrams
- Intermediate experience with any of the following programming languages: Python, Go, Java, Scala (we primarily use Python)
- Integration with third-party SaaS application APIs such as Salesforce, Zuora, etc.
- Ability to ensure data integrity and accuracy by conducting regular data audits, identifying and resolving data quality issues, and implementing data governance best practices
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Preferred Qualifications
- Hands-on experience with the Snowflake data platform, including administration, SQL scripting, and query performance tuning
- Good knowledge of modern as well as classic data modeling approaches – Kimball, Inmon, etc.
- Demonstrated experience in one or more business domains (Finance, Sales, Marketing)
- 3+ completed "production-grade" projects with dbt
- Expert knowledge of Python

What Does Our Data Stack Look Like?
- ELT (Snowflake, Fivetran, dbt, Airflow, Kafka, Hightouch)
- BI (Tableau, Looker)
- Infrastructure (GCP, AWS, Kubernetes, Terraform, GitHub Actions)

Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based.

Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration, while also giving you flexibility to work remotely for part of the week. This role must attend our local office for part of the week. The specific in-office schedule is to be determined by the hiring manager.

The Intelligent Heart Of Customer Experience
Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love. Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn, whilst also giving our people the flexibility to work remotely for part of the week. Zendesk is an equal opportunity employer, and we're proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request.
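For readers unfamiliar with the Airflow-plus-dbt pattern this posting describes, here is a minimal sketch of a daily ELT DAG that builds dbt models and then runs dbt tests. It assumes Airflow 2.4+ (the `schedule` argument) with dbt available on the worker; the DAG id, cron schedule, and project path are hypothetical, not Zendesk's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="edw_daily_build",          # hypothetical DAG name
    schedule="0 2 * * *",              # nightly at 02:00
    start_date=datetime(2024, 1, 1),
    catchup=False,                     # don't backfill missed runs on deploy
) as dag:
    # Build the warehouse models from raw/staged data.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/warehouse",   # hypothetical path
    )

    # Data quality tests gate downstream reporting on model correctness.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/warehouse",
    )

    dbt_run >> dbt_test  # tests only run after models build successfully
```

Ordering tests after the build, as the final line does, is what makes "reliable business reporting" enforceable: a failed test fails the DAG run and surfaces in monitoring before stale or bad data reaches dashboards.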
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary: Senior Engineer 2 (SDET)
Location: New Delhi
Division: Ticketmaster Sport International Engineering
Line Manager: Andrew French
Contract Terms: Permanent

THE TEAM
Ticketmaster Sport is the global leader in sports ticketing. From the smallest clubs to the biggest leagues and tournaments, we are trusted as their ticketing partner. You will be joining the Ticketmaster Sport International Engineering division, which is dedicated to the creation and maintenance of industry-standard ticketing software solutions. Our software is relied upon by our clients to manage and sell their substantial ticketing inventories. Our clients include some of the highest-profile clubs and organisations in sport. Reliability, quality, and performance are expected by our clients. We provide an extensive catalogue of hosted services including back-office tooling, public-facing web sales channels, and other services and APIs. The team you will join is closely involved in all these areas. The Ticketmaster Sport International Engineering division comprises distributed software development teams working together in a highly collaborative environment. You will be joining our expanding engineering team based in New Delhi.

THE JOB
You will be joining a Microsoft .NET development team as a Senior Quality Assurance Engineer. The team you will be joining is responsible for data engineering in the Sport platform. This includes developing back-end systems which integrate with other internal Ticketmaster systems, as well as with our external business partners. You will be required to work with event-driven systems, message queueing, API development, and much more besides. There is a tremendous opportunity for you to make a difference. We are looking for QA engineers who can help us drive our platform forward from a quality assurance point of view, as well as act as mentors for more junior members of the team. You will be working very closely with the team lead to ensure the quality of our software and to assist in the planning and decision-making process. Apart from standard manual testing activities, you will help improve our automated test suites, as well as be involved with performance testing. In essence, your job will be to ensure our software solutions are of the highest quality, robustness, and performance.

What You Will Be Doing
- Design, build, and maintain scalable and reusable test automation frameworks using C# .NET and Selenium
- Collaborate with developers, product managers, and QA to understand requirements and build comprehensive test plans
- Define, develop, and implement quality assurance practices, procedures and test plans
- Create, execute, and maintain automated functional, regression, integration and performance tests
- Ensure high code quality and testing standards across the team through code reviews and best practices
- Investigate test failures, diagnose bugs and file detailed bug reports
- Produce test and quality reports
- Integrate test automation with CI/CD pipelines (GitLab, Azure DevOps, Jenkins)
- Operate effectively within an organisation with teams spread across the globe
- Work effectively within a dynamic team environment to define and advocate for QA standards and best practices to ensure the highest level of quality

Technical Skills
Must have:
- 5+ years of experience in test automation development, preferably in an SDET role
- Strong hands-on experience with C# .NET and Selenium WebDriver
- Experience with tools like NUnit, SpecFlow, or similar test libraries
- Solid understanding of object-oriented programming (OOP) and software design principles
- Experience developing and maintaining custom automation frameworks from scratch
- Proficiency in writing clear, concise and comprehensive test cases and test plans
- Experience working in scrum teams within an Agile methodology
- Experience in developing regression and functional test plans and managing defects
- Ability to understand business requirements and identify scenarios for automated and manual testing
- Experience in performance testing using Gatling
- Experience working with Git and CI/CD pipelines
- Experience with web service testing (e.g. RESTful services), including test automation with REST Assured/Postman (a minimal illustration follows this listing)
- Proficiency working with relational databases such as MSSQL
- A deep understanding of web protocols and standards (e.g. HTTP, REST)
- A strong problem-solving and detail-oriented mindset

Nice to have:
- Exposure to performance testing tools
- Testing enterprise applications deployed to cloud environments such as AWS
- Experience with static code analysis tools like SonarQube
- Building test infrastructures using containerisation technologies such as Docker, and working with continuous delivery or continuous release pipelines
- Experience in microservice development
- Experience with Octopus Deploy
- Experience with TestRail
- Experience with event-driven architectures, messaging patterns and practices
- Experience with Kafka, AWS SQS or other similar technologies

You (behavioural skills)
- Excellent communication and interpersonal skills. We work with people all over the globe using English as a shared language. As a senior engineer you will be expected to help managers make decisions by describing problems and proposing solutions, and to respond positively to challenge.
- Excellent problem-solving skills.
- Desire to take on responsibility and to grow as a quality assurance software engineer.
- Enthusiasm for technology and a desire to communicate that to your fellow team members.
- The ability to pick up any ad-hoc technology and run with it.
- Continuous curiosity about new technologies on the horizon.

LIFE AT TICKETMASTER
We are proud to be a part of Live Nation Entertainment, the world's largest live entertainment company. Our vision at Ticketmaster is to connect people around the world to the live events they love. As the world's largest ticket marketplace and the leading global provider of enterprise tools and services for the live entertainment business, we are uniquely positioned to successfully deliver on that vision. We do it all with an intense passion for Live and an inspiring and diverse culture driven by accessible leaders, attentive managers, and enthusiastic teams. If you're passionate about live entertainment like we are, and you want to work at a company dedicated to helping millions of fans experience it, we want to hear from you.

Our work is guided by our values:
Reliability - We understand that fans and clients rely on us to power their live event experiences, and we rely on each other to make it happen.
Teamwork - We believe individual achievement pales in comparison to the level of success that can be achieved by a team
Integrity - We are committed to the highest moral and ethical standards on behalf of the countless partners and stakeholders we represent
Belonging - We are committed to building a culture in which all people can be their authentic selves, have an equal voice and opportunities to thrive

EQUAL OPPORTUNITIES
We are passionate and committed to our people and go beyond the rhetoric of diversity and inclusion. You will be working in an inclusive environment and be encouraged to bring your whole self to work. We will do all that we can to help you successfully balance your work and home life. As a growing business we will encourage you to develop your professional and personal aspirations, enjoy new experiences, and learn from the talented people you will be working with. It's talent that matters to us, and we encourage applications from people irrespective of their gender, race, sexual orientation, religion, age, disability status or caring responsibilities.
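As referenced in the skills list above, automated REST API checks are a core part of the role. The team's stack is C#/.NET with tools such as REST Assured; purely as a language-neutral illustration of the pattern, here is a minimal pytest sketch using the requests library. The base URL, endpoint, payloads, and expected status codes are hypothetical, not Ticketmaster's API.

```python
import requests

BASE_URL = "https://api.example.test"  # hypothetical service under test


def test_create_event_returns_201_and_echoes_name():
    """POST a new event and assert the server confirms creation."""
    payload = {"name": "Cup Final", "capacity": 50000}
    resp = requests.post(f"{BASE_URL}/events", json=payload, timeout=5)

    assert resp.status_code == 201
    body = resp.json()
    assert body["name"] == payload["name"]
    assert "id" in body  # server-assigned identifier


def test_get_unknown_event_returns_404():
    """Negative path: unknown resources must not leak data or return 200."""
    resp = requests.get(f"{BASE_URL}/events/does-not-exist", timeout=5)
    assert resp.status_code == 404
```

Suites like this slot naturally into the CI/CD pipelines the posting mentions, failing the build when a contract-breaking change ships.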
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Job Title: Senior Software Development Engineer (Sr. SDE)
Location: Noida

About Us:
At Clearwater Analytics, we are on a mission to become the world's most trusted and comprehensive technology platform for investment management, reporting, accounting, and analytics. We partner with sophisticated institutional investors worldwide and are seeking a Software Development Engineer who shares our passion for innovation and client commitment.

Role Overview:
We are seeking a skilled Software Development Engineer with strong coding and design skills, as well as hands-on experience in cloud technologies and distributed architecture. This role focuses on delivering high-quality software solutions within the FinTech sector, particularly in the Front Office, OEMS, PMS, and Asset Management domains.

Key Responsibilities:
- Design and develop scalable, high-performance software solutions in a distributed architecture environment.
- Collaborate with cross-functional teams to ensure engineering strategies align with business objectives and client needs.
- Implement real-time and asynchronous systems with a focus on event-driven architecture.
- Ensure operational excellence by adhering to best practices in software development and engineering.
- Present technical concepts and project updates clearly to stakeholders, fostering effective communication.

Requirements:
- 7-10 years of hands-on experience in software development, ideally within the FinTech sector.
- Strong coding and design skills, with a solid understanding of software development principles.
- Deep expertise in cloud platforms (AWS/GCP/Azure) and distributed architecture.
- Experience with real-time systems, event-driven architecture, and engineering excellence in a large-scale environment.
- Proficiency in Java and familiarity with messaging systems (JMS/Kafka/MQ).
- Excellent verbal and written communication skills.

Desired Qualifications:
- Experience in the FinTech sector, particularly in Front Office, OEMS, PMS, and Asset Management at scale.
- Bonus: Experience with BigTech, Groovy, Bash, Python, and knowledge of GenAI/AI technologies.

What we offer:
- Business casual atmosphere in a flexible working environment
- Team-focused culture that promotes innovation and ownership
- Access to cutting-edge investment reporting technology and expertise
- Defined and undefined career pathways, allowing you to grow your way
- Competitive medical, dental, vision, and life insurance benefits
- Maternity and paternity leave
- Personal Time Off and Volunteer Time Off to give back to the community
- RSUs, as well as an employee stock purchase plan and a 401(k) with a match
- Work from anywhere 3 weeks out of the year
- Work from home Fridays

Why Join Us?
This is an incredible opportunity to be part of a dynamic engineering team that is shaping the future of investment management technology. If you're ready to make a significant impact and advance your career, apply now!
Posted 1 week ago
20.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Overview
With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on.

At UKG, you get more than just a job. You get to work with purpose. Our team of U Krewers are on a mission to inspire every organization to become a great place to work through our award-winning HR technology built for all. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose — a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose — people — then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.

Job Description
We are seeking a seasoned Senior Director of Software Engineering with deep expertise in data platforms to lead and scale our data engineering organization. With deep industry experience, you will bring strategic vision, technical leadership, and operational excellence to drive innovation and deliver robust, scalable, and high-performing data solutions. You will partner closely with cross-functional teams to enable data-driven decision-making across the enterprise.

Key Responsibilities
- Define and execute the engineering strategy for modern, scalable data platforms.
- Lead, mentor, and grow a high-performing engineering organization.
- Partner with product, architecture, and infrastructure teams to deliver resilient data solutions.
- Drive technical excellence through best practices in software development, data modeling, security, and automation.
- Oversee the design, development, and deployment of data pipelines, lakehouses, and real-time analytics platforms (a minimal streaming sketch follows this listing).
- Ensure platform reliability, availability, and performance through proactive monitoring and continuous improvement.
- Foster a culture of innovation, ownership, and continuous learning.

Qualifications
- 20+ years of experience in software engineering with a strong focus on data platforms and infrastructure.
- Proven leadership of large-scale, distributed engineering teams.
- Deep understanding of modern data architectures (e.g., data lakes, lakehouses, streaming, warehousing).
- Proficiency in cloud-native data platforms (e.g., AWS, Azure, GCP), big data ecosystems (e.g., Spark, Kafka, Hive), and data orchestration tools.
- Strong software development background with expertise in one or more languages such as Python, Java, or Scala.
- Demonstrated success in driving strategic technical initiatives and cross-functional collaboration.
- Strong communication and stakeholder management skills at the executive level.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (Ph.D. a plus).

Where we're going
UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today.
Yet it’s our AI-powered product portfolio designed to support customers of all sizes, industries, and geographies that will propel us into an even brighter tomorrow! UKG is proud to be an equal opportunity employer and is committed to promoting diversity and inclusion in the workplace, including the recruitment process. Disability Accommodation in the Application and Interview Process For individuals with disabilities that need additional assistance at any point in the application and interview process, please email UKGCareers@ukg.com
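To make the streaming-to-lakehouse pattern this role oversees concrete, here is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and lands them as Parquet. The broker, topic, schema, and paths are all hypothetical, the spark-sql-kafka connector must be on the classpath, and this is an illustration of the general pattern, not UKG's platform.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructType, TimestampType

spark = SparkSession.builder.appName("shift-events-stream").getOrCreate()

# Hypothetical event schema, invented for illustration only.
schema = (
    StructType()
    .add("event_id", StringType())
    .add("event_type", StringType())
    .add("occurred_at", TimestampType())
)

# Subscribe to the raw event stream (requires the spark-sql-kafka package).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "shift-events")               # hypothetical topic
    .load()
)

# Kafka delivers bytes; cast to string and parse the JSON payload into columns.
events = raw.select(
    from_json(col("value").cast("string"), schema).alias("e")
).select("e.*")

# Land the parsed events in the lake; the checkpoint makes the job restartable.
query = (
    events.writeStream.format("parquet")
    .option("path", "/data/lake/shift_events")             # hypothetical lake path
    .option("checkpointLocation", "/data/chk/shift_events")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what gives the pipeline the reliability the posting emphasizes: on restart, Spark resumes from the last committed Kafka offsets rather than reprocessing or dropping events.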
Posted 1 week ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Your IT Future, Delivered.
Senior Software Engineer (AI/ML Engineer)

With a global team of 5,600+ IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the biggest logistics company of the world. All our offices have earned #GreatPlaceToWork certification, reflecting our commitment to exceptional employee experiences.

Digitalization. Simply delivered.
At DHL IT Services, we are designing, building and running IT solutions for the whole of DPDHL globally. Grow together.

The AI & Analytics team builds and runs solutions to get much more value out of our data. We help our business colleagues all over the world with machine learning algorithms, predictive models and visualizations. We manage more than 46 AI & Big Data applications, 3,000 active users, 87 countries and up to 100,000,000 daily transactions. Integrating AI & Big Data into business processes to compete in a data-driven world needs state-of-the-art technology. Our infrastructure, hosted on-prem and in the cloud (Azure and GCP), includes MapR, Airflow, Spark, Kafka, Jupyter, Kubeflow, Jenkins, GitHub, Tableau, Power BI, Synapse (Analytics), Databricks and further interesting tools. We like to do everything in an Agile/DevOps way. No more throwing the "problem code" to support, no silos. Our teams are completely product-oriented, having end-to-end responsibility for the success of our product.

Ready to embark on the journey? Here's what we are looking for:
Currently, we are looking for an AI / Machine Learning Engineer. In this role, you will have the opportunity to design and develop solutions, contribute to roadmaps of Big Data architectures and provide mentorship and feedback to more junior team members. We are looking for someone to help us manage the petabytes of data we have and turn them into value. Does that sound a bit like you? Let's talk! Even if you don't tick all the boxes below, we'd love to hear from you; our new department is rapidly growing and we're looking for many people with the can-do mindset to join us on our digitalization journey. Thank you for considering DHL as the next step in your career – we do believe we can make a difference together!

What will you need?
- University degree in Computer Science, Information Systems, Business Administration, or a related field.
- 2+ years of experience in a Data Scientist / Machine Learning Engineer role.
- Strong analytic skills related to working with structured, semi-structured and unstructured datasets.
- Advanced machine learning techniques: Decision Trees, Random Forest, Boosting Algorithms, Neural Networks, Deep Learning, Support Vector Machines, Clustering, Bayesian Networks, Reinforcement Learning, Feature Reduction/Engineering, Anomaly Detection, Natural Language Processing (incl. sentiment analysis and topic modeling; a minimal sketch follows this listing), Natural Language Generation.
- Statistics / Mathematics: Data Quality Analysis, Data Identification, Hypothesis Testing, Univariate/Multivariate Analysis, Cluster Analysis, Classification/PCA, Factor Analysis, Linear Modeling, Time Series, distribution/probability theory, and/or strong experience in specialized analytics tools and technologies (including, but not limited to, those below).
- Ability to lead the integration of large language models into AI applications.
- Very good Python programming skills.
- Power BI, Tableau.
- Ability to develop applications and deploy models in production.
- Kubeflow, MLflow, Airflow, Jenkins, CI/CD pipelines.

As an AI/ML Engineer, you will be responsible for developing applications and systems that leverage AI tools, Cloud AI services, and Generative AI models. Your role includes designing cloud-based or on-premises application pipelines that meet production-ready standards, utilizing deep learning, neural networks, chatbots, and image processing technologies.

Professional & Technical Skills:
Essential skills:
- Expertise in Large Language Models.
- Strong knowledge of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Practical experience with various machine learning algorithms, including linear regression, logistic regression, decision trees, and clustering techniques.
- Proficiency in data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
- Awareness of Apache Spark and Hadoop.
- Awareness of Agile/Scrum ways of working.
- Ability to identify the right modeling approach(es) for a given scenario and articulate why the approach fits.
- Ability to assess data availability and modeling feasibility.
- Ability to review the interpretation of model results.
- Experience in the logistics industry domain would be an added advantage.

Roles & Responsibilities:
- Act as a Subject Matter Expert (SME).
- Collaborate with and manage team performance.
- Make decisions that impact the team.
- Work with various teams and contribute to significant decision-making processes.
- Provide solutions to challenges that affect multiple teams.
- Lead the integration of large language models into AI applications.
- Research and implement advanced AI techniques to improve system performance.
- Assist in the development and deployment of AI solutions across different domains.

You should have:
- Certifications in some of the core technologies.
- Ability to collaborate across different teams/geographies/stakeholders/levels of seniority.
- Customer focus with an eye on continuous improvement.
- An energetic, enthusiastic and results-oriented personality.
- Ability to coach other team members; you must be a team player!
- Strong will to overcome the complexities involved in developing and supporting data pipelines.

Language requirements: English – fluent spoken and written (C1 level).

An array of benefits for you:
- Hybrid work arrangements to balance in-office collaboration and home flexibility.
- Annual leave: 42 days off apart from public/national holidays.
- Medical insurance: self + spouse + 2 children. An option to opt for voluntary parental insurance (parents/parents-in-law) at a nominal premium, covering pre-existing diseases.
- In-house training programs: professional and technical training certifications.
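As referenced in the skills list above, sentiment analysis is one of the NLP techniques the role calls for. Here is a minimal sketch of a classical sentiment pipeline using scikit-learn, with toy labelled examples invented purely for illustration; a production model would train on a proper corpus and be deployed through the MLOps tooling the posting names (Kubeflow, MLflow, CI/CD).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data for illustration only; real training needs a large corpus.
texts = [
    "parcel arrived on time, great service",
    "driver was late and the box was damaged",
    "tracking updates were clear and helpful",
    "package lost in transit, no response from support",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# TF-IDF turns free text into numeric features; logistic regression classifies.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams as features
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["delivery was fast and the courier was friendly"]))
```

Wrapping vectorizer and classifier in a single pipeline object, as above, keeps preprocessing and inference consistent between training and production serving.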
Posted 1 week ago