
42 Low Latency Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As an Emerging Tech Specialist at our Applied Research Center, you will apply scientific and technical research to solve practical problems and develop new products or processes within a specific industry or field. You will conduct research, analyze data, and develop solutions that can be implemented in real-world settings. You will define the research agenda, collaborate with academia and partners to conduct applied research, and build tech playbooks that can be used by product and implementation teams.

Key responsibilities:
- Research emerging tech trends, the ecosystem of players, use cases, and their impact on client businesses, including scanning and curating startups, universities, and tech partnerships to create an innovation ecosystem.
- Rapidly design and develop Proofs of Concept (PoCs) in emerging tech areas, sharing design specifications with team members and integrating and testing components.
- Contribute to thought leadership by developing showcases that demonstrate the application of emerging technologies in a business context.
- As part of Applied Research Center activities, contribute to the design, development, testing, and implementation of proofs of concept in emerging tech areas, including problem definition and requirements analysis, developing reusable components, and ensuring compliance with coding standards and secure coding guidelines.
- Innovation consulting: understand client requirements and implement solutions using emerging tech expertise.
- Talent management: mentor the team to acquire identified emerging tech skills, and participate in demo sessions and hackathons.
- Work with startups to provide innovative solutions to client problems and enhance our offerings.

Technical requirements: expertise in emerging areas such as Advanced AI, New Interaction Models, Platforms and Protocols, Cybersecurity, Quantum, Autonomous Machines, and emerging research areas such as Brain AGI and Space Semicon; advanced theoretical knowledge, experimental design expertise, data analysis skills, prototype development capabilities, and research tool proficiency. Preferred skills include experience in User Experience Design, Artificial Intelligence, Cybersecurity Competency Management, Machine Learning, Robotics Algorithms, and X Reality (XR) technologies. Soft skills such as a collaborative mindset, communication skills, a problem-solving approach, intellectual curiosity, and commercial awareness will be beneficial for this role.

Posted 2 days ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Lead Software Engineer, Backend

We're seeking a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include:
- Developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, environmental, and transactional signals to power people-based marketing.
- Ingesting vast amounts of identity and event data from our customers and partners.
- Facilitating data transfers across systems.
- Ensuring the integrity and health of our datasets.
And much more.

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential responsibilities:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in on-call rotation in your respective time zone (being available by phone or email in case something goes wrong)

Desired characteristics:
- Minimum 5 years of software engineering experience
- Proven long-term experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in both cloud and on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficiency with relational databases and advanced SQL
- Expert use of services like Spark and Hive
- Experience with web frameworks such as Flask and Django
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale / multi-tenant distributed systems
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase
- Experience with real-time streaming frameworks: Flink, Storm
- Experience with open table formats such as Iceberg, Hudi, or Delta Lake
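As a purely illustrative aside, the "developing ETL processes for analysis and reporting" responsibility above boils down to an extract-transform-load loop. The sketch below shows a minimal batch version in plain Python with SQLite standing in for a warehouse; the event fields and table name are hypothetical, not any employer's actual schema.

```python
"""Minimal batch ETL sketch: parse raw JSON events, apply a basic
data-quality gate, and load the result into a warehouse table
(SQLite stands in for Snowflake/Hive here)."""

import json
import sqlite3

# Extract: raw event records as they might arrive from a partner feed.
RAW_EVENTS = [
    '{"user_id": "u1", "event": "click", "ts": "2024-05-01T10:00:00"}',
    '{"user_id": "u2", "event": "view", "ts": "2024-05-01T10:01:00"}',
    '{"user_id": "u1", "event": "view", "ts": "2024-05-01T10:02:00"}',
]

def transform(lines):
    """Parse and normalise raw JSON events, dropping invalid rows."""
    rows = []
    for line in lines:
        rec = json.loads(line)
        if not rec.get("user_id"):  # basic data-quality gate
            continue
        rows.append((rec["user_id"], rec["event"], rec["ts"]))
    return rows

def load(rows):
    """Load normalised rows into an in-memory warehouse table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (user_id TEXT, event TEXT, ts TEXT)")
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    return conn

conn = load(transform(RAW_EVENTS))
clicks = conn.execute(
    "SELECT COUNT(*) FROM events WHERE event = 'click'"
).fetchone()[0]  # 1
```

In production the same three stages would typically be separate Airflow tasks, with Spark doing the transform at scale.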

Posted 1 week ago

Apply

8.0 - 13.0 years

7 - 11 Lacs

Bengaluru

Work from Office

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential responsibilities:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in on-call rotation in your respective time zone (being available by phone or email in case something goes wrong)

Desired characteristics:
- Minimum 8 years of software engineering experience
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired
- 2+ years of experience and fluency in Python
- Proficiency with relational databases and advanced SQL
- Expert use of services like Spark and Hive; experience with container-based solutions is a plus
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Proven long-term experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in both cloud and on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Experience in the Advertising Attribution domain is a plus
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Posted 1 week ago

Apply

8.0 - 13.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Essential responsibilities:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in on-call rotation in your respective time zone (being available by phone or email in case something goes wrong)

Desired characteristics:
- Minimum 8 years of software engineering experience
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired
- 2+ years of experience and fluency in Python
- Proficiency with relational databases and advanced SQL
- Expert use of services like Spark and Hive; experience with container-based solutions is a plus
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Proven long-term experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in both cloud and on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Experience in the Advertising Attribution domain is a plus
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Posted 1 week ago

Apply

8.0 - 13.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Essential responsibilities:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in on-call rotation in your respective time zone (being available by phone or email in case something goes wrong)

Desired characteristics:
- Minimum 8 years of software engineering experience
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired
- 2+ years of experience and fluency in Python
- Proficiency with relational databases and advanced SQL
- Expert use of services like Spark and Hive; experience with container-based solutions is a plus
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Proven long-term experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in both cloud and on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Experience in the Advertising Attribution domain is a plus
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Senior Software Engineer, Data

We're seeking a Senior Software Engineer or a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include:
- Developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, location, and transactional signals to power people-based marketing.
- Ingesting vast amounts of identity and event data from our customers and partners.
- Facilitating data transfers across systems.
- Ensuring the integrity and health of our datasets.
And much more.

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential responsibilities:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in on-call rotation in your respective time zone (being available by phone or email in case something goes wrong)

Desired characteristics:
- Minimum 5-10 years of software engineering experience
- Proven long-term experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in both cloud and on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficiency with relational databases and advanced SQL
- Expert use of services like Spark and Hive
- Experience with web frameworks such as Flask and Django
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience with Kafka or other stream message processing solutions
- Experience using cloud services (AWS) at scale
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale / multi-tenant distributed systems
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase
- Experience with real-time streaming frameworks: Flink, Storm
- Experience with open table formats such as Iceberg, Hudi, or Delta Lake
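The stream-processing requirement above (Kafka, and frameworks like Flink or Storm) usually comes down to windowed aggregation over an unbounded message flow. The sketch below is a hedged, framework-free illustration of a tumbling one-minute window in plain Python; the in-memory message list stands in for a real topic, and all names are hypothetical.

```python
"""Tumbling-window aggregation sketch: count events per one-minute
window, the core operation a streaming framework would perform over
a Kafka topic."""

from collections import defaultdict

# Each message is (event_time_seconds, user_id); a stand-in for a topic.
MESSAGES = [(0, "u1"), (12, "u2"), (61, "u1"), (65, "u3"), (130, "u1")]

def count_per_minute(messages):
    """Assign each event to the window starting at the nearest
    earlier minute boundary and count events per window."""
    windows = defaultdict(int)
    for ts, _user in messages:
        windows[ts // 60 * 60] += 1
    return dict(windows)

counts = count_per_minute(MESSAGES)
# counts == {0: 2, 60: 2, 120: 1}
```

Real frameworks add the hard parts this sketch omits: out-of-order events, watermarks, and fault-tolerant state.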

Posted 2 weeks ago

Apply

7.0 - 12.0 years

12 - 16 Lacs

Bengaluru

Work from Office

We are looking for lead or principal software engineers to join our Data Cloud team. The Data Cloud team is responsible for the Zeta Identity Graph platform, which captures billions of behavioural, demographic, environmental, and transactional signals for people-based marketing. As part of this team, the data engineer will design and grow our existing data infrastructure to democratize data access, enable complex data analyses, and automate optimization workflows for business and marketing operations.

Essential responsibilities:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as HDFS, Spark, Snowflake, Hive, HBase, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in a 24/7 on-call rotation (being available by phone or email in case something goes wrong)

Desired characteristics:
- Minimum 7 years of software engineering experience
- Proven long-term experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in both cloud and on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficiency with relational databases and advanced SQL
- Expert use of services like Spark, HDFS, Hive, HBase
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale / multi-tenant distributed systems
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase
- Experience with real-time streaming frameworks: Flink, Storm
- Experience with web frameworks such as Flask and Django

Posted 2 weeks ago

Apply

10.0 - 20.0 years

90 - 100 Lacs

Bengaluru

Hybrid

Roles and responsibilities:
- Develop a comprehensive automation strategy aligned with organizational goals.
- Evaluate and select automation tools, integrating them into the existing framework.
- Automate network provisioning and configuration, from defining complex MPLS and QoS logic to configuring VLANs using intent-based automation.
- Define performance and monitoring metrics for automation pipelines.
- Ensure automation solutions are scalable and optimized to support growing network demands.
- Work with the Service Owner to document processes and provide training for network administrators and engineers.
- Continuously evaluate and enhance automation solutions to align with evolving business needs.
- Scripting and programming knowledge in Java, .NET, Terraform, Python, YAML, Ansible, and API development (hands-on coding is not required); an understanding of network concepts is beneficial.
- Gain deep knowledge of the organization's technology landscape and leverage it to create strategic linkages.
- Lead teams by driving motivation, retention, and performance management while fostering an inclusive culture.
- Develop learning and development plans for teams and mentor junior professionals.
- Act as the go-to expert in your domain and expand knowledge across related domains.
- Align technology/domain strategy with business objectives and help implement strategic initiatives.
- Lead the delivery of complex, high-impact projects, ensuring quality outcomes.
- Work closely with other domain architects to break solutions into roadmaps and Epics.
- Guide the design and planning of Epics, translating them into actionable development journeys.
- Modify existing patterns to develop robust and scalable solutions.
- Represent the organization professionally while building strong relationships with internal and external stakeholders.
- Drive collaboration across teams and communicate complex concepts effectively.

Preferred candidate profile:
- Strong software development and delivery experience.
- Expertise in multiple programming languages and third-party software products (proficiency in at least two is required).
- Experience in multi-application management and integrating diverse technologies.
- Proven ability in stakeholder management, team leadership, and multi-group coordination.
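The intent-based automation mentioned above means the operator declares the desired network state and tooling renders device configuration from it. The sketch below is an illustrative, vendor-neutral toy: a declarative VLAN intent rendered into per-switch CLI-style lines. The intent schema and switch names are hypothetical, not any particular product's format.

```python
"""Intent-based configuration sketch: render a declarative VLAN
intent into per-switch config lines."""

# Declarative intent: what the network should look like, not how to get there.
INTENT = {
    "vlans": [
        {"id": 10, "name": "engineering", "switches": ["sw1", "sw2"]},
        {"id": 20, "name": "finance", "switches": ["sw2"]},
    ]
}

def render_config(intent):
    """Turn the intent into a mapping of switch -> CLI-style config lines."""
    config = {}
    for vlan in intent["vlans"]:
        for switch in vlan["switches"]:
            config.setdefault(switch, []).append(
                f"vlan {vlan['id']} name {vlan['name']}"
            )
    return config

config = render_config(INTENT)
# config["sw2"] gets both VLANs; config["sw1"] only VLAN 10.
```

In practice the intent would live in YAML under version control and a tool such as Ansible or Terraform would push the rendered state, diffing it against the devices' running configuration.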

Posted 2 weeks ago

Apply

10.0 - 20.0 years

85 - 100 Lacs

Bengaluru

Hybrid

Roles and responsibilities:
- Develop a comprehensive automation strategy aligned with organizational goals.
- Evaluate and select automation tools, integrating them into the existing framework.
- Automate network provisioning and configuration, from defining complex MPLS and QoS logic to configuring VLANs using intent-based automation.
- Define performance and monitoring metrics for automation pipelines.
- Ensure automation solutions are scalable and optimized to support growing network demands.
- Work with the Service Owner to document processes and provide training for network administrators and engineers.
- Continuously evaluate and enhance automation solutions to align with evolving business needs.
- Scripting and programming knowledge in Java, .NET, Terraform, Python, YAML, Ansible, and API development (hands-on coding is not required); an understanding of network concepts is beneficial.
- Gain deep knowledge of the client's technology landscape and leverage it to create strategic linkages.
- Lead teams by driving motivation, retention, and performance management while fostering an inclusive culture.
- Develop learning and development plans for teams and mentor junior professionals.
- Act as the go-to expert in your domain and expand knowledge across related domains.
- Align technology/domain strategy with business objectives and help implement strategic initiatives.
- Lead the delivery of complex, high-impact projects, ensuring quality outcomes.
- Work closely with other domain architects to break solutions into roadmaps and Epics.
- Guide the design and planning of Epics, translating them into actionable development journeys.
- Modify existing patterns to develop robust and scalable solutions.
- Represent our client professionally while building strong relationships with internal and external stakeholders.
- Drive collaboration across teams and communicate complex concepts effectively.

Preferred candidate profile

Posted 2 weeks ago

Apply

10.0 - 16.0 years

40 - 85 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Principal Engineer
Experience: 10 - 16 years
Salary: Competitive
Preferred notice period: Within 30 days
Shift: 9:30 AM to 6:30 PM IST
Opportunity type: Hybrid - Bengaluru
Placement type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Python OR Scala OR Java, AND Distributed Systems OR reduced latency OR Akka

Netskope (one of Uplers' clients) is looking for a Principal Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

About the team: We are a distributed team of passionate engineers dedicated to continuous learning and building impactful products. Our core product is built from the ground up in Scala, leveraging the Lightbend stack (Play/Akka).

About Data Security Posture Management (DSPM): DSPM is designed to provide comprehensive data visibility and contextual security for the modern AI-driven enterprise. Our platform automatically discovers, classifies, and secures sensitive data across diverse environments, including AWS, Azure, Google Cloud, Oracle Cloud, and on-premise infrastructure. DSPM is critical in empowering CISOs and security teams to enforce secure AI practices, reduce compliance risk, and maintain continuous data protection at scale. As a key member of the DSPM team, you will contribute to developing innovative and scalable systems designed to protect the exponentially increasing volume of enterprise data. Our platform continuously maps sensitive data across all major cloud providers and on-prem environments, automatically detecting and classifying sensitive and regulated data types such as PII, PHI, financial, and healthcare information. It flags data at risk of exposure, exfiltration, or misuse and helps prioritize issues based on sensitivity, exposure, and business impact.

What you'll do:
- Drive the enhancement of DSPM capabilities by enabling the detection of sensitive or risky data used in (but not limited to) training private LLMs or accessed by public LLMs.
- Improve the DSPM platform to extend support to all major cloud infrastructures, on-prem deployments, and new upcoming technologies.
- Provide technical leadership in all phases of a project, from discovery and planning through implementation and delivery.
- Contribute to product development: understand customer requirements and work with the product team to design, plan, and implement features.
- Support customers by investigating and fixing production issues.
- Help us improve our processes and make the right tradeoffs between agility, cost, and reliability.
- Collaborate with teams across geographies.

What you bring to the table:
- 10+ years of software development experience with enterprise-grade software.
- Experience building scalable, high-performance cloud services (must have).
- Expert coding skills in Scala or Java.
- Development on cloud platforms, including AWS.
- Deep knowledge of databases and data warehouses (OLTP, OLAP).
- Analytical and troubleshooting skills.
- Experience working with Docker and Kubernetes.
- Ability to multitask and wear many hats in a fast-paced environment: this week you might lead the design of a new feature; next week you are fixing a critical bug or improving our CI infrastructure.
- Cybersecurity experience and adversarial thinking is a plus.
- Expertise in building REST APIs.
- Experience leveraging cloud-based AI services (e.g., AWS Bedrock, SageMaker), vector databases, and Retrieval-Augmented Generation (RAG) architectures is a plus.
- Education: Bachelor's degree in Computer Science; Master's degree strongly preferred.

How to apply for this opportunity (easy 3-step process):
1. Click Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About our client: Netskope, a global SASE leader, helps organizations apply zero-trust principles and AI/ML innovations to protect data and defend against cyber threats. Fast and easy to use, the Netskope platform provides optimized access and real-time security for people, devices, and data anywhere they go. Netskope helps customers reduce risk, accelerate performance, and get unrivaled visibility into any cloud, web, and private application activity.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: there are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
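For intuition only: the automatic detection and classification of sensitive data types that the DSPM description mentions can be sketched, at its very simplest, as pattern matching over text. The toy below uses two regex patterns in plain Python; a real platform like the one described would use far richer detectors, and the patterns and labels here are purely hypothetical.

```python
"""Toy sensitive-data classifier: tag a text span with the
sensitive-data types (email, US SSN) it appears to contain."""

import re

# Hypothetical patterns for two common sensitive-data types.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels detected in `text`."""
    return {label for label, pat in PATTERNS.items() if pat.search(text)}

labels = classify("Contact jane@example.com, SSN 123-45-6789.")
# labels == {"email", "ssn"}
```

Production systems layer validation, context, and ML on top of such patterns to keep false positives down, and run the scan continuously across cloud storage rather than over a single string.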

Posted 2 weeks ago

Apply

10.0 - 16.0 years

40 - 85 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Principal Engineer
Experience: 10 - 16 years
Salary: Competitive
Preferred notice period: Within 30 days
Shift: 9:30 AM to 6:30 PM IST
Opportunity type: Hybrid - Bengaluru
Placement type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Python OR Scala OR Java, AND Distributed Systems OR reduced latency OR Akka

Netskope (one of Uplers' clients) is looking for a Principal Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

About the team: We are a distributed team of passionate engineers dedicated to continuous learning and building impactful products. Our core product is built from the ground up in Scala, leveraging the Lightbend stack (Play/Akka).

About Data Security Posture Management (DSPM): DSPM is designed to provide comprehensive data visibility and contextual security for the modern AI-driven enterprise. Our platform automatically discovers, classifies, and secures sensitive data across diverse environments, including AWS, Azure, Google Cloud, Oracle Cloud, and on-premise infrastructure. DSPM is critical in empowering CISOs and security teams to enforce secure AI practices, reduce compliance risk, and maintain continuous data protection at scale. As a key member of the DSPM team, you will contribute to developing innovative and scalable systems designed to protect the exponentially increasing volume of enterprise data. Our platform continuously maps sensitive data across all major cloud providers and on-prem environments, automatically detecting and classifying sensitive and regulated data types such as PII, PHI, financial, and healthcare information. It flags data at risk of exposure, exfiltration, or misuse and helps prioritize issues based on sensitivity, exposure, and business impact.

What you'll do:
- Drive the enhancement of DSPM capabilities by enabling the detection of sensitive or risky data used in (but not limited to) training private LLMs or accessed by public LLMs.
- Improve the DSPM platform to extend support to all major cloud infrastructures, on-prem deployments, and new upcoming technologies.
- Provide technical leadership in all phases of a project, from discovery and planning through implementation and delivery.
- Contribute to product development: understand customer requirements and work with the product team to design, plan, and implement features.
- Support customers by investigating and fixing production issues.
- Help us improve our processes and make the right tradeoffs between agility, cost, and reliability.
- Collaborate with teams across geographies.

What you bring to the table:
- 10+ years of software development experience with enterprise-grade software.
- Experience building scalable, high-performance cloud services (must have).
- Expert coding skills in Scala or Java.
- Development on cloud platforms, including AWS.
- Deep knowledge of databases and data warehouses (OLTP, OLAP).
- Analytical and troubleshooting skills.
- Experience working with Docker and Kubernetes.
- Ability to multitask and wear many hats in a fast-paced environment: this week you might lead the design of a new feature; next week you are fixing a critical bug or improving our CI infrastructure.
- Cybersecurity experience and adversarial thinking is a plus.
- Expertise in building REST APIs.
- Experience leveraging cloud-based AI services (e.g., AWS Bedrock, SageMaker), vector databases, and Retrieval-Augmented Generation (RAG) architectures is a plus.
- Education: Bachelor's degree in Computer Science; Master's degree strongly preferred.

How to apply for this opportunity (easy 3-step process):
1. Click Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About our client: Netskope, a global SASE leader, helps organizations apply zero-trust principles and AI/ML innovations to protect data and defend against cyber threats. Fast and easy to use, the Netskope platform provides optimized access and real-time security for people, devices, and data anywhere they go. Netskope helps customers reduce risk, accelerate performance, and get unrivaled visibility into any cloud, web, and private application activity.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: there are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 weeks ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: In this role, you will work to increase the domain data coverage and adoption of the Data Platform by promoting a connected user experience through data. You will increase data literacy and trust by leading our Data Governance and Master Data Management initiatives. You will contribute to the vision and roadmap of self-serve capabilities through the Data Platform. The senior data engineer develops data pipelines that extract and transform data as governed assets into the data platform; improves system quality by identifying issues and common patterns and developing standard operating procedures; and enhances applications by identifying opportunities for improvement, making recommendations, and designing and implementing systems.

Roles and responsibilities:
- (MUST HAVE) Extensive experience with cloud data warehouses like Snowflake and AWS Athena, and SQL databases like PostgreSQL and MS SQL Server. Experience with NoSQL databases like AWS DynamoDB and Azure Cosmos DB is a plus.
- (MUST HAVE) Solid experience with, and a clear understanding of, dbt.
- (MUST HAVE) Experience working with AWS and/or Azure CI/CD DevOps technologies, and extensive debugging experience.
- Good understanding of data modeling, ETL, data curation, and big data performance tuning.
- Experience with data ingestion tools like Fivetran is a big plus.
- Experience with data quality and observability tools like Monte Carlo is a big plus.
- Experience working and integrating with an event bus like Pulsar is a big plus.
- Experience integrating with a data catalog like Atlan is a big plus.
- Experience with business intelligence tools like Power BI is a plus.
- An understanding of unit testing, test-driven development, functional testing, and performance testing.
- Knowledge of at least one shell scripting language.
- Ability to network with key contacts outside your own area of expertise.
- Strong interpersonal, organizational, presentation, and facilitation skills.
- Results-oriented and customer-focused.

Technical experience and professional attributes:
- Prepare technical design specifications based on functional requirements and analysis documents.
- Implement, test, maintain, and support software based on technical design specifications.
- Improve system quality by identifying issues and common patterns and developing standard operating procedures.
- Enhance applications by identifying opportunities for improvement, making recommendations, and designing and implementing systems.
- Maintain and improve existing codebases and peer-review code changes.
- Liaise with colleagues to implement technical designs.
- Investigate and use new technologies where relevant.
- Provide written knowledge-transfer material.
- Review functional requirements, analysis, and design documents and provide feedback.
- Assist customer support with technical problems and questions.
- Ability to work independently with wide latitude for independent decision making.
- Experience leading the work of others and mentoring less experienced developers in the context of a project is a plus.
- Ability to listen to and understand information, and to communicate it clearly.
- Participate in architecture and code reviews.
- Lead or participate in other projects or duties as the need arises.

Education qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field, or an equivalent combination of education and experience; Master's degree is a plus.
- 5 or more years of extensive experience developing mission-critical, low-latency solutions.
- At least 3 years of experience developing and debugging distributed systems and data pipelines in the cloud.

Additional Information: The Winning Way behaviors that all employees need in order to meet the expectations of each other, our customers, and our partners:
- Communicate with Clarity: be clear, concise, and actionable. Be relentlessly constructive. Seek and provide meaningful feedback.
- Act with Urgency: adopt an agile mentality with frequent iterations, improved speed, and resilience. Apply the 80/20 rule; better is the enemy of done. Don't spend hours when minutes are enough.
- Work with Purpose: exhibit a "We Can" mindset. Results outweigh effort. Everyone understands how their role contributes. Set aside personal objectives for team results.
- Drive to Decision: cut the swirl with defined deadlines and decision points. Be clear on individual accountability and decision authority. Be guided by a commitment to, and accountability for, customer outcomes.
- Own the Outcome: define milestones, commitments, and intended results. Assess your work in context; if you're unsure, ask. Demonstrate unwavering support for decisions.

Comments: The above statements are intended to describe the general nature and level of work performed by individuals in this position. Other functions may be assigned, and management retains the right to add to or change the duties at any time.

Qualification: 15 years of full-time education
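The ETL and data-quality duties above boil down to a recurring pattern: extract raw records, transform them into typed, governed assets, and quarantine anything that fails validation rather than letting it pollute the warehouse. A minimal, library-free sketch of that pattern (all field names are hypothetical, and real pipelines would use tools like dbt or Fivetran mentioned in the posting):

```python
# Minimal ETL sketch with a data-quality gate; field names are hypothetical.
raw_rows = [  # "extract": rows as they might arrive from a source system
    {"id": "1", "amount": "10.50", "country": "IN"},
    {"id": "2", "amount": "oops", "country": "IN"},  # bad record
]

def transform(row: dict) -> dict:
    """Cast strings to proper types; raises on bad data so the gate can act."""
    return {"id": int(row["id"]),
            "amount": float(row["amount"]),
            "country": row["country"]}

clean, quarantined = [], []
for row in raw_rows:
    try:
        clean.append(transform(row))          # "load" path for good rows
    except (ValueError, KeyError):
        quarantined.append(row)               # dead-letter area for review

print(len(clean), len(quarantined))  # -> 1 1
```

Observability tools like Monte Carlo (named in the posting as a plus) essentially automate monitoring of the quarantine rate and schema drift across many such pipelines.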

Posted 2 weeks ago

Apply

10.0 - 16.0 years

40 - 85 Lacs

Bengaluru

Hybrid

Principal Engineer
Experience: 10-16 years | Salary: Competitive | Preferred Notice Period: within 30 days | Shift: 9:30 AM to 6:30 PM IST | Opportunity Type: Hybrid (Bengaluru) | Placement Type: Permanent
(Note: this is a requirement for one of Uplers' clients.)
Must-have skills: (Python OR Scala OR Java) AND (Distributed Systems OR reduced latency OR Akka)

Netskope (one of Uplers' clients) is looking for a Principal Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

About the team: We are a distributed team of passionate engineers dedicated to continuous learning and building impactful products. Our core product is built from the ground up in Scala, leveraging the Lightbend stack (Play/Akka).

About Data Security Posture Management (DSPM): DSPM is designed to provide comprehensive data visibility and contextual security for the modern AI-driven enterprise. Our platform automatically discovers, classifies, and secures sensitive data across diverse environments, including AWS, Azure, Google Cloud, Oracle Cloud, and on-premise infrastructure. DSPM is critical in empowering CISOs and security teams to enforce secure AI practices, reduce compliance risk, and maintain continuous data protection at scale. As a key member of the DSPM team, you will contribute to developing innovative and scalable systems designed to protect the exponentially increasing volume of enterprise data. Our platform continuously maps sensitive data across all major cloud providers and on-prem environments, automatically detecting and classifying sensitive and regulated data types such as PII, PHI, financial, and healthcare information. It flags data at risk of exposure, exfiltration, or misuse and helps prioritize issues based on sensitivity, exposure, and business impact.
The "What you'll do", "What you bring to the table", education, and application details for this listing, along with the About Our Client (Netskope) and About Uplers notes, are identical to the Principal Engineer description above.

Posted 2 weeks ago

Apply

10.0 - 16.0 years

40 - 85 Lacs

Pune, Delhi / NCR, Mumbai (All Areas)

Hybrid

Principal Engineer at Netskope (one of Uplers' clients). The role overview, responsibilities, requirements, and application process for this listing are identical to the Principal Engineer posting above.

Posted 2 weeks ago

Apply

10.0 - 16.0 years

40 - 85 Lacs

Bengaluru

Hybrid

Principal Engineer at Netskope (one of Uplers' clients). The role overview, responsibilities, requirements, and application process for this listing are identical to the Principal Engineer posting above.

Posted 2 weeks ago

Apply

10.0 - 16.0 years

40 - 85 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Principal Engineer at Netskope (one of Uplers' clients). The role overview, responsibilities, requirements, and application process for this listing are identical to the Principal Engineer posting above.

Posted 2 weeks ago

Apply

10.0 - 16.0 years

40 - 85 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Hybrid

Principal Engineer at Netskope (one of Uplers' clients). The role overview, responsibilities, requirements, and application process for this listing are identical to the Principal Engineer posting above.

Posted 2 weeks ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Chennai

Work from Office

DTCC Digital Assets is at the forefront of driving institutional adoption of digital assets technology, with a steadfast commitment to innovation anchored in security and stability. As the financial services industry's trusted technology partner, we pride ourselves on empowering a globally interconnected and efficient ecosystem. Our mission is to provide secure and compliant infrastructure for digital assets, enabling financial institutions to unlock the full potential of blockchain technology.

We are seeking an experienced and highly skilled Principal Data Engineer to join our dynamic team. As a Principal Data Engineer, you will play a crucial role in designing, building, and growing our greenfield Snowflake Data Platform for Digital Assets.

The Impact you will have in this role: The Principal Data Engineer role is substantial in shaping the data infrastructure and strategic direction of the Digital Assets department. By leading the design and implementation of a greenfield Snowflake Data Platform, this role directly influences the organization's ability to manage and leverage data for operational efficiency and risk assessment. The Associate Director ensures that data systems are scalable, secure, and aligned with business goals, enabling faster decision-making and innovation. Their leadership in managing cross-functional teams and collaborating with stakeholders ensures that technical solutions are not only robust but also responsive to evolving business needs. Beyond technical execution, this role plays a pivotal part in fostering a culture of accountability, growth, and inclusion. By mentoring team members, driving employee engagement, and promoting best practices in agile development and data governance, the Associate Director helps build a resilient and high-performing engineering organization. Their contributions to incident management, platform adoption, and continuous improvement ensure that the data platform remains reliable and future-ready, positioning the company to stay competitive in the rapidly evolving digital assets landscape.

Role description:
- Lead engineering and development projects from start to finish with minimal supervision.
- Provide technical and operational support for our customer base as well as other technical areas within the company.
- Review and supervise the system design and architecture.
- Interact with stakeholders to understand requirements and provide solutions.
- Perform risk management functions such as reconciliation of vulnerabilities and security baselines, as well as other risk and audit related objectives.
- Refine and prioritize the backlog for the team in partnership with product management.
- Groom and guide the team of employees and consultants; responsible for employee engagement, growth, and appraisals.
- Participate in user training to increase awareness of the platform.
- Ensure incident, problem, and change tickets are addressed in a timely fashion, escalating technical and managerial issues as needed.
- Ensure quality and consistency of data from source systems, and align with data product managers on resolving data issues in a consistent manner.
- Follow DTCC's ITIL process for incident, change, and problem resolution.

Talents Needed for Success:
- Bachelor's degree in Computer Science, Information Technology, Engineering (any), or a related field.
- 8 years of experience in this or a related position, including:
- 5 years of experience managing data warehouses in a production environment, covering all phases of lifecycle management: planning, design, deployment, upkeep, and retirement.
- 5 years leading development teams with a mix of onshore and offshore members.
- Experience designing and architecting data warehousing applications: fact and dimension modeling, star/snowflake schemas, and data integration methods and tools.
- Deep understanding of the Snowflake platform and data pipeline design.
- SQL and relational databases.
- Development in agile scrum teams, following CI/CD processes.
- Demonstrable experience with data streaming technologies such as Kafka for data ingestion.
- Knowledge of blockchain technologies, smart contracts, and financial services a plus.
- Designing low-latency data platforms a plus.
- Knowledge of data governance principles a plus.
- Ability to optimize and tune source streams, queries, and Power BI (or equivalent) dashboards.

Leadership competencies:
- Champion Inclusion - Embrace individual differences and create an environment of support, belonging, and trust.
- Communicate Clearly - Listen to understand, ask questions for clarity, and deliver messages with purpose.
- Cultivate Relationships - Show care and compassion for others and authentically build networks across functions.
- Instill Ownership - Ensure accountability, manage execution, and mitigate risk to deliver results.
- Inspire Growth - Develop yourself and others through coaching, feedback, and mentorship to meet career goals.
- Propel Change - Think critically, respectfully challenge, and create innovative ways to drive growth.
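The warehousing concepts the listing names (fact and dimension tables, star schemas) can be sketched in miniature. Below is a hypothetical, minimal star schema in in-memory SQLite; the table and column names are illustrative inventions, not DTCC's actual data model:

```python
# Hypothetical star schema: a trades fact table joined to instrument and
# date dimensions. Uses in-memory SQLite so the example is self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dim_instrument (instrument_id INTEGER PRIMARY KEY,
                                 symbol TEXT, asset_class TEXT);
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, trade_date TEXT);
    CREATE TABLE fact_trades (
        trade_id INTEGER PRIMARY KEY,
        instrument_id INTEGER REFERENCES dim_instrument(instrument_id),
        date_id INTEGER REFERENCES dim_date(date_id),
        quantity INTEGER, price REAL);
""")
cur.executemany("INSERT INTO dim_instrument VALUES (?, ?, ?)",
                [(1, "BTC-TOKEN", "digital"), (2, "ETH-TOKEN", "digital")])
cur.execute("INSERT INTO dim_date VALUES (1, '2024-01-02')")
cur.executemany("INSERT INTO fact_trades VALUES (?, ?, ?, ?, ?)",
                [(1, 1, 1, 10, 42000.0), (2, 2, 1, 5, 2200.0)])

# The typical star-schema query shape: aggregate the fact table
# grouped by a dimension attribute.
cur.execute("""
    SELECT i.symbol, SUM(f.quantity * f.price) AS notional
    FROM fact_trades f JOIN dim_instrument i USING (instrument_id)
    GROUP BY i.symbol ORDER BY i.symbol
""")
rows = cur.fetchall()
print(rows)  # [('BTC-TOKEN', 420000.0), ('ETH-TOKEN', 11000.0)]
```

A snowflake schema differs only in that dimensions are further normalized into sub-dimension tables; the fact table and the query shape stay the same.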

Posted 2 weeks ago

Apply

6.0 - 11.0 years

20 - 25 Lacs

Gurugram

Remote

Title: C++ Developer - Desktop Applications
Location: Work from home
Mandatory skills: 6-12 yrs in Desktop Applications - C++, Multithreading, Microservices, DSA

We are looking for a Senior Software Developer (Desktop) specializing in desktop application development, with 6-10 years of experience. This role requires strong system-level programming skills, experience with microservices architecture, and an understanding of high-performance, low-latency applications.

Key Responsibilities:
- Apply data structures and algorithms (DSA) to ensure efficient system design
- Optimize performance, security, and memory management for desktop applications
- Develop backend integrations using a microservices architecture, through APIs and socket.io
- Work with MongoDB for database management and optimize data handling
- Apply concepts such as message queuing and caching to optimize application performance
- Ensure code quality, security, and maintainability through code reviews and best practices
- Stay updated with the latest desktop development frameworks and trends

Required Skills & Experience:
- 6-12 years of experience in software development, with a focus on desktop applications and backend
- Strong expertise in C++ development
- Good understanding of microservices architecture, queuing concepts, and caching concepts
- Strong debugging, profiling, and system optimization skills
- Ability to work in a fast-paced, agile environment with a focus on scalability and security

Preferred Qualifications:
- B.Tech / M.Tech
- Experience in high-performance multimedia streaming applications

Interested individuals can apply here or share their profile at hr@lancetechsolutions.com
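The caching concept this listing calls out can be illustrated with a minimal sketch. This is a generic LRU (least-recently-used) cache in Python for clarity, not the listing's C++ stack; the class name and capacity are illustrative:

```python
# Minimal LRU cache sketch: evicts the least recently used entry when
# capacity is exceeded, the usual pattern in front of a slow lookup.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)   # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touching "a" makes "b" the eviction candidate
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

The same recency bookkeeping is what C++ implementations typically build from a `std::list` plus a hash map.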

Posted 4 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

Location: Bangalore, Mumbai

We are looking for candidates with hands-on experience in Core Java, multi-threading, algorithms, data structures, SQL, React, and Kafka. We are solving complex technical problems in the financial industry and need talented software engineers to join our mission as part of a global software development team. This is a brilliant opportunity to become part of a highly motivated and expert team that has made its mark in high-end technical consulting.

Required Skills:
- 5 to 10 years of experience.
- Experience in Core Java 5.0 and above, CXF, and Spring.
- Extensive experience developing enterprise-scale n-tier applications for the financial domain.
- Good architectural knowledge and awareness of enterprise application design patterns.
- Ability to analyze, design, develop, and test complex, low-latency client-facing applications.
- Good development experience with RDBMS, preferably Sybase.
- Good knowledge of multi-threading and high-volume server-side development.
- Experience with sales and trading platforms in investment banking/capital markets.
- Basic working knowledge of Unix/Linux.
- Excellent problem solving and coding skills in Java.
- Strong interpersonal, communication, and analytical skills, with the ability to express design ideas clearly.
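The multi-threading and high-volume server-side skills this listing asks for boil down to patterns like producer/consumer over a thread-safe queue. A minimal sketch in Python (the role itself is Java; the structure, a bounded queue plus a sentinel to signal shutdown, is the same in `java.util.concurrent`):

```python
# Producer/consumer sketch: one thread produces work items, another
# consumes them through a thread-safe bounded queue. A sentinel value
# tells the consumer to stop. All names and sizes are illustrative.
import threading
import queue

work_queue = queue.Queue(maxsize=100)  # bounded: producer blocks if full
results = []
results_lock = threading.Lock()
SENTINEL = None

def producer(n_items):
    for i in range(n_items):
        work_queue.put(i)
    work_queue.put(SENTINEL)  # signal end of stream

def consumer():
    while True:
        item = work_queue.get()
        if item is SENTINEL:
            break
        with results_lock:          # protect shared state
            results.append(item * item)

p = threading.Thread(target=producer, args=(5,))
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(sorted(results))  # [0, 1, 4, 9, 16]
```

In Java the equivalents are `ArrayBlockingQueue` for the bounded queue and a poison-pill object for the sentinel.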

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 9 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office

What do you need for this opportunity? Must-have skills: APIs, microservices, backend Java, FIX protocol, low latency, Agile, C++, Scrum

SoftSolutions! SRL is looking for: Software Developer Consultant - C++ / Backend Java / Low Latency / FIX Protocol - Fully remote

SoftSolutions is a leading Italian company specializing in developing software solutions for regulated financial markets. Since 1997, we have supported major banks in optimizing bond issuance processes and trading fixed-income instruments. With our headquarters in Italy and collaborators across the globe, we stand out for our ability to innovate with cutting-edge technologies like nexRates, XTAuctions, and BestX:FI-A. Thanks to the quality of our solutions, we are the trusted partner of investment banks and global financial institutions.

Do you dream of leaving your mark in the world of technology and finance? With SoftSolutions, you can make it happen, especially if you have held this role for some time and now want to work with a great, motivated company in an international context, with groundbreaking technology and exciting clients. We are looking for a talented and motivated Software Developer Consultant to join our team and work on our cutting-edge nexRates product suite. If you have a passion for software development and a deep understanding of financial trading systems, we would love to hear from you!

Responsibilities:
- Develop, maintain, and enhance financial trading microservices as part of the nexRates product suite, using C++, backend Java, low-latency techniques, and FIX protocol integration.
- Collaborate with a cross-functional Scrum team to ensure high-quality development and timely delivery of features and improvements.
- Participate in regular code reviews, ensuring adherence to best practices and coding standards.
- Analyze and troubleshoot complex technical issues, identifying root causes and implementing solutions to improve system performance and stability.
- Assist in the integration of third-party systems and APIs, ensuring seamless and secure communication between components.
- Collaborate with product owners, architects, and other stakeholders to gather requirements, refine user stories, and prioritize tasks.
- Continuously stay up-to-date with industry trends and emerging technologies, proactively seeking opportunities to improve and optimize the nexRates product suite.
- Provide technical guidance and support to junior team members, fostering a culture of knowledge sharing and continuous improvement.

Requirements:
- Bachelor's or Master's degree in Computer Science or a related field.
- Proven experience as a software developer with expertise in C++, backend Java, low latency, and FIX protocol integration.
- Strong understanding of software development principles and practices, including Agile and Scrum methodologies.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication skills, including the ability to explain technical concepts to non-technical stakeholders.

This is a full-time position. If you are a motivated software developer with a passion for developing innovative solutions, we encourage you to apply. Work from home 100%. You will be required to work within the CET timezone, with a minimum overlap of 7 hours.
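The FIX protocol integration this role centers on works with messages encoded as `tag=value` fields separated by the SOH character (0x01). A minimal parsing sketch; the sample message is hand-made for illustration and the tag meanings shown are from the public FIX standard, not SoftSolutions' code:

```python
# Sketch of FIX tag=value parsing. Standard tags used in the sample:
# 8=BeginString, 35=MsgType (D = NewOrderSingle), 55=Symbol,
# 54=Side (1 = Buy), 38=OrderQty. Fields are delimited by SOH (0x01).
SOH = "\x01"

def parse_fix(raw: str) -> dict:
    """Split a raw FIX message into a {tag: value} dict."""
    fields = {}
    for pair in raw.strip(SOH).split(SOH):
        tag, _, value = pair.partition("=")  # split on first '=' only
        fields[int(tag)] = value
    return fields

# Hand-made sample order message (symbol is a made-up bond identifier).
sample = SOH.join(["8=FIX.4.4", "35=D", "55=EURB2030", "54=1", "38=1000000"]) + SOH
msg = parse_fix(sample)
print(msg[35], msg[55])  # D EURB2030
```

Production FIX engines additionally validate body length (tag 9) and checksum (tag 10) and handle repeating groups, which this sketch omits.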

Posted 1 month ago

Apply

6.0 - 11.0 years

20 - 25 Lacs

Gurugram

Remote

Title: C++ Developer - Desktop Applications
Location: Work from home
Mandatory skills: 6-12 yrs in Desktop Applications - C++, Microservices, DSA

We are hiring a Desktop Software Developer with strong system-level programming skills to build high-performance desktop applications and backend integrations. The ideal candidate should be experienced in C++ or Rust, with a solid understanding of microservices architecture, DSA, and low-latency systems.

Key Responsibilities:
- Design & develop cross-platform desktop apps using C++ / Rust / Python with Tauri / Webview2
- Implement efficient system designs using strong DSA principles
- Optimize app performance, memory, and security
- Integrate with microservices using APIs and socket.io
- Work with MongoDB, caching, and messaging systems
- Perform code reviews and ensure best practices in maintainability and security

Required Skills:
- 6-10 years in software development (desktop + backend)
- Strong in C++ or Rust
- Hands-on with MongoDB, microservices, caching/message queues
- Familiar with Tauri, Webview2, system profiling/debugging
- Agile mindset with a focus on scalable, secure coding

Interested individuals can apply here or share their profile at hr@lancetechsolutions.com

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Pune

Work from Office

Job Title: KDB+ Developer - Equities Electronic Trading
Experience: 4 to 12 years
Location: Pune
Industry: Capital Markets / Investment Banking / FinTech
Employment Type: Full Time

Job Description: We are looking for a highly skilled KDB+ Developer to build and support microservices for low-latency, high-throughput trading workflows in global equities markets.

Key Responsibilities:
- Build and maintain KDB microservices across Colo, Campus, and Cloud infrastructure
- Design real-time and historical analytics pipelines for market and execution data
- Support quant researchers, algo developers, and trading strategists with robust tools and data access
- Automate reporting and dashboarding for trading, risk, compliance, and regulatory use
- Implement QSPEC-based test automation for continuous delivery and system reliability
- Collaborate in Agile/DevOps environments and contribute to CI/CD and production change workflows

Required Skills:
- Strong experience in KDB+/q and tick architecture for low-latency data environments
- Solid understanding of equities trading, market microstructure, and order types
- Proven domain experience in Electronic Trading, Systematic Market Making, or Agency Execution
- Hands-on with CI/CD pipelines, production support, and trading system automation

Nice to Have:
- Familiarity with QSPEC, regulatory audit reporting, and hybrid infrastructure deployment
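The core idea behind the tick architecture this role works with, append-only time-ordered data answering fast time-window queries, can be sketched conceptually. This is Python for illustration, not q, and `TickStore` is a made-up toy, not kdb+'s actual tickerplant/RDB design:

```python
# Conceptual tick-store sketch: ticks arrive in time order, so windows
# can be answered by binary search over the sorted timestamp column.
import bisect

class TickStore:
    def __init__(self):
        self._times = []    # sorted, since ticks arrive in time order
        self._prices = []   # column-oriented, like a kdb+ table column

    def append(self, ts: float, price: float):
        self._times.append(ts)
        self._prices.append(price)

    def window(self, start: float, end: float):
        """Prices with start <= ts < end, found in O(log n) + O(k)."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_left(self._times, end)
        return self._prices[lo:hi]

store = TickStore()
for ts, px in [(1.0, 100.0), (2.0, 100.5), (3.0, 99.8), (4.0, 100.2)]:
    store.append(ts, px)
print(store.window(2.0, 4.0))  # [100.5, 99.8]
```

The column-per-field layout mirrors why q queries over time ranges are fast: the timestamp column is contiguous and sorted, so range selection never scans unrelated fields.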

Posted 1 month ago

Apply

5.0 - 10.0 years

35 - 50 Lacs

Pune

Hybrid

Job Title: Sr. Software Engineer (Fixed Income Sell Side)
Experience Range: 5 to 10 years
Location: Pune

Roles & Responsibilities:
- Design and implement new features and enhancements for our Fixed Income EMS/OMS platform using Java and related technologies.
- Collaborate with product owners to define requirements and translate them into technical specifications.
- Write clean, efficient, and well-documented code, adhering to best practices and coding standards.
- Participate in code reviews and contribute to improving overall code quality.
- Troubleshoot and resolve complex technical issues related to the platform.
- Optimise system performance and scalability to handle high transaction volumes.
- Contribute to the development and maintenance of automated testing frameworks.
- Stay up-to-date with the latest technologies and trends in fixed income trading and software development.
- Mentor and guide junior engineers.
- Contribute to the overall architecture and design of the platform.

Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field.
- 5+ years of experience in software development, with a focus on Java.
- Strong understanding of object-oriented programming principles and design patterns.
- Proven experience developing and implementing Fixed Income EMS/OMS platforms.
- Deep understanding of fixed income instruments (e.g., bonds, derivatives) and trading workflows.
- Knowledge of the FIX protocol.
- Experience with messaging systems (e.g., Kafka, RabbitMQ) and distributed architectures.
- Proficiency in SQL and database technologies.
- Experience with Agile development methodologies.
- Excellent problem-solving and analytical skills.
- Candidates with C++ experience will be highly preferred.
- Knowledge of cloud computing platforms (AWS).
- Experience with performance tuning and optimization.
- Experience building enterprise applications with Jakarta EE.
- Strong communication and collaboration skills.

Competitive Benefits Offered By Our Client:
- Relocation Support: An additional relocation allowance to assist with moving expenses.
- Comprehensive Health Benefits: Including medical, dental, and vision coverage.
- Flexible Work Schedule: Hybrid work model with an expectation of just 2 days on-site per week.
- Generous Paid Time Off (PTO): 21 days per year, with the ability to roll over 1 day into the following year. Additionally, 1 day per year is allocated for volunteering, 2 training days per year for uninterrupted professional development, and 1 extra PTO day during milestone years.
- Paid Holidays & Early Dismissals: A robust paid holiday schedule with early dismissal on select days, plus generous parental leave for all genders, including adoptive parents.
- Tech Resources: A rent-to-own program offering employees a company-provided Mac/PC laptop and/or mobile phone of their choice, along with a tech accessories budget for monitors, headphones, keyboards, and other office equipment.
- Health & Wellness Subsidies: Contributions toward gym memberships and health/wellness initiatives to support your well-being.
- Milestone Anniversary Bonuses: Special bonuses to celebrate key career milestones.
- Inclusive & Collaborative Culture: A forward-thinking, culture-based organisation that values diversity and inclusion and fosters collaborative teams.
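The fixed income instrument knowledge the role above requires starts with bond pricing: a bond's price is the present value of its coupons plus the discounted face value. A simplified sketch with annual coupons and a flat discount rate; the numbers are made up for illustration:

```python
# Textbook bond pricing sketch: discount each annual coupon and the
# face value at the yield to maturity (ytm). Annual compounding only;
# real trading systems handle day counts, accrued interest, etc.
def bond_price(face: float, coupon_rate: float, ytm: float, years: int) -> float:
    """Price = sum of discounted coupons + discounted face value."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

# Sanity check: when the yield equals the coupon rate, the bond
# trades at par (price == face value).
print(round(bond_price(100.0, 0.05, 0.05, 10), 6))  # 100.0
```

A useful corollary for intuition: raising the yield above the coupon rate pushes the price below par, which is the price/yield inverse relationship at the heart of fixed income trading.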

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad

Hybrid

We are seeking a skilled Database Specialist with strong expertise in time-series databases, specifically Loki for logs, and InfluxDB and Splunk for metrics. The ideal candidate will have a solid background in query languages, Grafana, Alertmanager, and Prometheus. This role involves managing and optimizing time-series databases, ensuring efficient data storage, retrieval, and visualization.

Key Responsibilities:
- Design, implement, and maintain time-series databases using Loki, InfluxDB, and Splunk to store and manage high-velocity time-series data.
- Develop efficient data ingestion pipelines for time-series data from various sources (e.g., IoT devices, application logs, metrics).
- Optimize database performance for high write and read throughput, ensuring low latency and high availability.
- Implement and manage retention policies, downsampling, and data compression strategies to optimize storage and query performance.
- Collaborate with DevOps and infrastructure teams to deploy and scale time-series databases in cloud or on-premise environments.
- Build and maintain dashboards and visualization tools (e.g., Grafana) for monitoring and analyzing time-series data.
- Troubleshoot and resolve issues related to data ingestion, storage, and query performance.
- Work with development teams to integrate time-series databases into applications and services.
- Ensure data security, backup, and disaster recovery mechanisms are in place for time-series databases.
- Stay updated with the latest advancements in time-series database technologies and recommend improvements to existing systems.

Key Skills:
- Strong expertise in time-series databases with Loki (for logs), and InfluxDB and Splunk (for metrics).
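The downsampling strategy this listing mentions is conceptually simple: collapse raw points into fixed-width time buckets, keeping an aggregate such as the mean, which is how InfluxDB-style continuous queries reduce storage for old data. A generic sketch; the bucket width and sample data are illustrative:

```python
# Downsampling sketch: group (timestamp, value) points into fixed-width
# buckets and keep one aggregate (here, the mean) per bucket.
from collections import defaultdict

def downsample(points, bucket_seconds):
    """points: iterable of (epoch_seconds, value) -> [(bucket_start, mean)]."""
    buckets = defaultdict(list)
    for ts, value in points:
        bucket_start = ts - (ts % bucket_seconds)  # snap down to bucket edge
        buckets[bucket_start].append(value)
    return sorted((start, sum(vs) / len(vs)) for start, vs in buckets.items())

# Four raw points collapse to two one-minute buckets.
raw = [(0, 1.0), (10, 3.0), (65, 5.0), (70, 7.0)]
print(downsample(raw, 60))  # [(0, 2.0), (60, 6.0)]
```

Retention policies then drop the raw series after some age while the downsampled series is kept longer, trading resolution for storage.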

Posted 1 month ago

Apply