1.0 - 3.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
We are looking for a software engineer to join the OCI Security & Compliance Platform team. The platform and its algorithms monitor for and detect threats, data breaches, and other malicious activity using machine learning and data science technologies. These services help organizations maintain their security and compliance posture. This role provides a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal and external providers. We leverage Spark, Kafka, and machine learning technologies running on OCI. You'll work with product managers, designers, and engineers to build data-driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers. Career Level - IC2

Responsibilities:
- Develop a highly available and scalable platform that aggregates and analyzes streams of events with a small window of durability (a short sketch follows this listing)
- Design, deploy, and manage large-scale data systems and services built on OCI
- Develop, maintain, and tune threat detection algorithms
- Develop test beds and tools to help reduce noise and improve time to detect threats

Desired Skills and Experience:
- 1+ years of hands-on, large-scale cloud application software development
- 1+ years of experience in cloud infrastructure security and risk assessment
- 1+ years of hands-on experience with three of the following technologies: Kafka, Redis, AWS, Kubernetes, REST APIs, Linux
- 1+ years of experience using and building highly available streaming data solutions like Flink or Spark Streaming
- 1+ years of experience building applications on Oracle Cloud Infrastructure
- Critical thinking: the ability to track down complex data and engineering issues, and to analyze data to solve problems
- Experience with development methodologies with short release cycles
- Excellent problem-solving and communication skills with both technical and non-technical audiences
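The first responsibility, windowed aggregation over event streams, maps naturally onto a streaming engine like the Spark mentioned in the posting. Below is a minimal, hedged PySpark Structured Streaming sketch; the broker address, topic name, and event schema are invented for illustration, and the job assumes the spark-sql-kafka connector package is on the classpath:

```python
# Hypothetical sketch: windowed aggregation over a stream of security events.
# Broker address, topic, and schema are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("event-window-agg").getOrCreate()

event_schema = StructType([
    StructField("source", StringType()),
    StructField("action", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed address
    .option("subscribe", "security-events")            # assumed topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Count events per source over 1-minute tumbling windows; a short watermark
# bounds state, echoing the "small window of durability" in the posting.
counts = (
    events.withWatermark("event_time", "2 minutes")
    .groupBy(window(col("event_time"), "1 minute"), col("source"))
    .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```

The watermark is the key design choice here: it caps how long per-window state is retained, trading late-event completeness for bounded memory.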
Posted 1 week ago
6.0 - 10.0 years
10 - 17 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Job Description: We are looking for a skilled Data/Analytics Engineer with hands-on experience in vector databases and search optimization techniques. You will help build scalable, high-performance infrastructure to support AI-powered applications such as semantic search, recommendation systems, and RAG pipelines.

Key Responsibilities:
- Optimize vector search algorithms for performance and scalability.
- Build pipelines to process high-dimensional embeddings (e.g., BERT, CLIP, OpenAI).
- Implement ANN indexing techniques such as HNSW, IVF, and PQ (see the sketch after this listing).
- Integrate vector search with data platforms and APIs.
- Collaborate with cross-functional teams (data scientists, engineers, product).
- Monitor and resolve latency, throughput, and scaling issues.

Must-Have Skills:
- Python
- AWS
- Vector databases (e.g., Elasticsearch, FAISS, Pinecone)
- Vector search / similarity search
- ANN search algorithms: HNSW, IVF, PQ
- Snowflake / Databricks
- Embedding models: BERT, CLIP, OpenAI
- Kafka / Flink for real-time data pipelines
- REST APIs, GraphQL, or gRPC for integration

Good to Have:
- Knowledge of semantic caching and hybrid retrieval
- Experience with distributed systems and high-performance computing
- Familiarity with RAG (Retrieval-Augmented Generation) workflows

Apply Now if You:
- Enjoy solving performance bottlenecks in AI infrastructure
- Love working with cutting-edge ML models and search technologies
- Thrive in collaborative, fast-paced environments
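To make the ANN-indexing requirement concrete, here is a small, self-contained FAISS sketch. The dimensionality, parameters, and random vectors are illustrative stand-ins for real embeddings, not values from the posting:

```python
# Illustrative HNSW index with FAISS; vectors are random stand-ins for
# real embeddings (e.g., from BERT or CLIP).
import faiss
import numpy as np

dim, n_vectors, k = 128, 10_000, 5
rng = np.random.default_rng(42)
xb = rng.standard_normal((n_vectors, dim)).astype("float32")
xq = rng.standard_normal((1, dim)).astype("float32")

index = faiss.IndexHNSWFlat(dim, 32)   # 32 = graph connectivity (M)
index.hnsw.efConstruction = 200        # build-time accuracy/speed trade-off
index.add(xb)

index.hnsw.efSearch = 64               # query-time accuracy/speed trade-off
distances, ids = index.search(xq, k)   # top-k nearest neighbors
print(ids[0], distances[0])
```

Tuning M, efConstruction, and efSearch is exactly the latency-versus-recall trade-off the responsibilities describe.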
Posted 1 week ago
6.0 - 11.0 years
6 - 11 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Strong experience with Java backend development and with large data processing applications using Flink/Beam. Experience with GCP is a plus. Experience with BigQuery or Oracle is needed.
Location: Virtual
Experience: 6-9 years
Skills: Java, Apache Flink/Storm/Beam, and GCP
Note: Looking for immediate to 30-day joiners at most.
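The role centers on large data processing with Flink/Beam. As a hedged illustration only (the posting's stack is Java; this compact Python pipeline on Beam's local DirectRunner is a stand-in, portable to a Flink runner via pipeline options), a minimal Beam job looks like:

```python
# Minimal Beam pipeline: count events per key. Runs on the local
# DirectRunner; the same pipeline can target a Flink runner via options.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create([("orders", 1), ("orders", 1), ("refunds", 1)])
        | "SumPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)  # -> ('orders', 2) ('refunds', 1)
    )
```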
Posted 1 week ago
2.0 - 7.0 years
6 - 10 Lacs
Bengaluru
Work from Office
About the Position
This is an opportunity for Engineering Managers to join our Data Platform organization, which is passionate about scaling high-volume, low-latency, distributed data-platform services and data products. In this role, you will work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As the manager of the Data Foundations team in the Data Platform Group, your team will be responsible for designing, building, and deploying the foundational systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, ElasticSearch, and Snowflake, and we are now looking to adopt GCP. We are seeking an Engineering Manager with a strong technical background and excellent communication skills to join us and partner with senior leadership as a thought leader on our strategic Data & ML projects. Our platform projects have a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design, and implementation of the data solutions to these problems.

What you will be doing:
- Recruit and mentor a globally distributed and talented group of diverse employees
- Collaborate with Product, Design, QA, Documentation, Customer Support, Program Management, TechOps, and other scrum teams
- Engage in technical design discussions and help drive technical architecture
- Ensure the happiness and productivity of the team's software engineers
- Communicate the vision of our product to external entities
- Help mitigate risk (technical, product, personnel)
- Utilize professional acumen to improve Okta's technology, product, and engineering
- Participate in relevant engineering workgroups and on-call rotations
- Foster, enable, and promote innovation
- Define team metrics and meet productivity goals of the organization
- Track and manage cloud infrastructure costs in partnership with Okta's FinOps team

What you will bring to the role:
- A track record of leading or managing high-performing platform teams (at least 2 years of experience)
- Experience with end-to-end project delivery, building roadmaps through operational sustainability
- Strong facilitation skills (design, requirements gathering, progress and status sessions)
- Production experience with distributed systems running in AWS; GCP a bonus
- Passion for automation and leveraging agile software development methodologies
- Prior experience with a data platform
- Prior hands-on software development experience as an IC using cloud-based distributed computing technologies, including: messaging systems such as Kinesis or Kafka; data processing systems like Flink, Spark, or Beam; storage and compute systems such as Snowflake or Hadoop; and coordinators and schedulers like those in Kubernetes, Hadoop, or Mesos
- Experience developing and tuning highly scalable distributed systems
- Experience with reliability engineering, specifically in areas such as data quality, data observability, and incident management

Extra credit if you have experience in any of the following:
- Deep Data & ML experience
- Multi-cloud experience
- Federal cloud environments / FedRAMP
- Contributed to the development of distributed systems, or used one or more at high volume or criticality, such as Kafka or Hadoop
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Hybrid
About the Team
The Data Platform team is responsible for the foundational data services, systems, and data products at Okta that benefit our users. Today, the Data Platform team solves challenges and enables:
- Streaming analytics
- Interactive end-user reporting
- The data and ML platform for Okta to scale
- Telemetry of our products and data

Our elite team is fast, creative, and flexible. We encourage ownership. We expect great things from our engineers and reward them with stimulating new projects, new technologies, and the chance to have significant equity in a company. Okta is about to change the cloud computing landscape forever.

About the Position
This is an opportunity for experienced Software Engineers to join our fast-growing Data Platform organization, which is passionate about scaling high-volume, low-latency, distributed data-platform services and data products. In this role, you will work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As a member of the Data Platform team, you will be responsible for designing, building, and deploying the systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, ElasticSearch, and Snowflake. We are looking for experienced Software Engineers who can help design and own the building, deployment, and optimization of the streaming infrastructure. This project has a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design, and implementation of the solutions to these problems.

Job Duties and Responsibilities:
- Design, implement, and own data-intensive, high-performance, scalable platform components
- Work with engineering teams, architects, and cross-functional partners on the development, design, and implementation of projects
- Conduct and participate in design reviews, code reviews, analysis, and performance tuning
- Coach and mentor engineers to help scale up the engineering organization
- Debug production issues across services and multiple levels of the stack

Required Knowledge, Skills, and Abilities:
- 5+ years of experience in an object-oriented language, preferably Java
- Hands-on experience using cloud-based distributed computing technologies, including: messaging systems such as Kinesis or Kafka; data processing systems like Flink, Spark, or Beam; storage and compute systems such as Snowflake or Hadoop; and coordinators and schedulers like those in Kubernetes, Hadoop, or Mesos
- Experience in developing and tuning highly scalable distributed systems
- Excellent grasp of software engineering principles
- Solid understanding of multithreading, garbage collection, and memory management
- Experience with reliability engineering, specifically in areas such as data quality, data observability, and incident management

Nice to have:
- Maintained security, encryption, identity management, or authentication infrastructure
- Leveraged major public cloud providers to build mission-critical, high-volume services
- Hands-on experience developing data integration applications for large-scale (petabyte-scale) environments, in both batch and online systems
- Contributed to the development of distributed systems, or used one or more at high volume or criticality, such as Kafka or Hadoop
- Experience developing Kubernetes-based services on the AWS stack
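Since the position centers on owning streaming infrastructure fed by Kinesis/Kafka into Flink, here is a minimal, hedged PyFlink sketch of the kind of component involved. The in-memory source and event data are invented; a real job would read from a Kafka or Kinesis connector:

```python
# Minimal PyFlink job: key events by type and keep a running count.
# The in-memory source is a stand-in for a Kafka/Kinesis connector.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

events = env.from_collection([
    ("login", 1), ("login", 1), ("logout", 1), ("login", 1),
])

(
    events
    .key_by(lambda e: e[0])                    # partition by event type
    .reduce(lambda a, b: (a[0], a[1] + b[1]))  # running count per key
    .print()
)

env.execute("event-counts")
```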
Posted 1 week ago
2.0 - 7.0 years
4 - 8 Lacs
Bengaluru
Work from Office
We are looking for experienced Software Engineers who can help design and own the building, deployment, and optimization of the streaming infrastructure. This project has a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design, and implementation of the solutions to these problems.

Job Duties and Responsibilities:
- Design, implement, and own data-intensive, high-performance, scalable platform components
- Work with engineering teams, architects, and cross-functional partners on the development, design, and implementation of projects
- Conduct and participate in design reviews, code reviews, analysis, and performance tuning
- Coach and mentor engineers to help scale up the engineering organization
- Debug production issues across services and multiple levels of the stack

Required Knowledge, Skills, and Abilities:
- 2+ years of software development experience
- Proficiency in at least one backend language and comfort in more than one, preferably Java or TypeScript, Ruby, Go, or Python
- Experience working with at least one of the database technologies: MySQL, Redis, or PostgreSQL
- Demonstrable knowledge of computer science fundamentals and strong API design skills
- Comfort working on a geographically distributed extended team
- The right attitude for the team: ownership, accountability, attention to detail, and customer focus
- A track record of delivering work incrementally to get feedback and iterating over solutions
- Comfort in React or a similar front-end UI stack; if not comfortable yet, a willingness to learn

Nice to have:
- Experience using cloud-based distributed computing technologies, such as: messaging systems such as Kinesis or Kafka; data processing systems like Flink, Spark, or Beam; storage and compute systems such as Snowflake or Hadoop; and coordinators and schedulers like those in Kubernetes, Hadoop, or Mesos
- Maintained security, encryption, identity management, or authentication infrastructure
- Leveraged major public cloud providers to build mission-critical, high-volume services
- Hands-on experience developing data integration applications for large-scale (petabyte-scale) environments, in both batch and online systems
- Contributed to the development of distributed systems, or used one or more at high volume or criticality, such as Kafka or Hadoop
Posted 1 week ago
9.0 - 12.0 years
0 - 3 Lacs
Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 11
The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.
The Impact: The work you do will be used every single day; it's the essential code you'll write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets.
What's in it for you:
- Build a career with a global company.
- Work on code that fuels the global financial markets.
- Grow and improve your skills by working on enterprise-level products and new technologies.
Responsibilities:
- Solve problems, and analyze and isolate issues.
- Provide technical guidance and mentoring to the team, and help them adopt change as new processes are introduced.
- Champion best practices and serve as a subject matter authority.
- Develop solutions to support key business needs.
- Engineer components and common services based on standard development models, languages, and tools.
- Produce system design documents and lead technical walkthroughs.
- Produce high-quality code.
- Collaborate effectively with technical and non-technical partners.
- As a team member, continuously improve the architecture.
Basic Qualifications:
- 9-12 years of experience designing and building data-intensive solutions using distributed computing.
- Proven experience implementing and maintaining enterprise search solutions in large-scale environments.
- Experience working with business stakeholders and users, providing research direction and solution design, and writing robust, maintainable architectures and APIs.
- Experience developing and deploying search solutions in a public cloud such as AWS.
- Proficient programming skills in high-level languages: Java, Scala, Python.
- Solid knowledge of at least one machine learning research framework.
- Familiarity with containerization, scripting, cloud platforms, and CI/CD.
- 5+ years of experience with Python, Java, Kubernetes, and data and workflow orchestration tools.
- 4+ years of experience with Elasticsearch, SQL, NoSQL, Apache Spark, Flink, Databricks, and MLflow.
- Prior experience operationalizing data-driven pipelines for large-scale batch and stream-processing analytics solutions.
- Good to have: experience contributing to GitHub and open-source initiatives, research projects, and/or participation in Kaggle competitions.
- Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines.
- Strong communication and documentation skills for both technical and non-technical audiences.
Preferred Qualifications:
- Search technologies: querying and indexing content for Apache Solr, Elasticsearch, etc.
- Proficiency in search query languages (e.g., Lucene query syntax) and experience with data indexing and retrieval (a short sketch follows this listing).
- Experience with machine learning models and NLP techniques for search relevance and ranking.
- Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec).
- Experience with relevance tuning using A/B testing frameworks.
- Big data technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow.
- Data science search technologies: personalization and recommendation models, Learning to Rank (LTR).
- Preferred languages: Python, Java.
- Database technologies: MS SQL Server platform, with stored procedure programming experience using Transact-SQL.
- Ability to lead, train, and mentor.
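As a small illustration of the Lucene query syntax the posting calls out, here is a hedged elasticsearch-py sketch; the host, index name, and field names are invented for the example:

```python
# Illustrative query_string (Lucene syntax) search with elasticsearch-py.
# Host, index, and fields are assumptions for the sketch.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="research-docs",
    query={
        "query_string": {
            "query": 'title:("commodity prices" OR crude) AND year:[2020 TO 2024]',
            "default_field": "body",
        }
    },
    size=10,
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```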
Posted 1 week ago
8.0 - 13.0 years
40 - 65 Lacs
Bengaluru
Work from Office
About the team
When 5% of Indian households shop with us, it's important to build resilient systems to manage millions of orders every day. We've done this – with zero downtime! Sounds impossible? Well, that's the kind of engineering muscle that has helped Meesho become the e-commerce giant it is today. We value speed over perfection, and see failures as opportunities to become better. We've taken steps to inculcate a strong 'Founder's Mindset' across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member, and we do this with regular 1-1s and open communication. As Engineering Manager, you will be part of a team of self-starters who thrive on teamwork and constructive feedback. We know how to party as hard as we work! If we aren't building unparalleled tech solutions, you can find us debating the plot points of our favourite books and games – or even gossiping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, join us.

About the role
We are looking for a seasoned Engineering Manager well-versed in emerging technologies to join our team. As an Engineering Manager, you will ensure consistency and quality by shaping the right strategies. You will keep an eye on all engineering projects and ensure all duties are fulfilled. You will analyse other employees' tasks and carry on collaborations effectively. You will also transform newbies into experts and build reports on the progress of all projects.

What you will do
- Design tasks for other engineers, keeping Meesho's guidelines and standards in mind
- Keep a close eye on various projects and monitor progress
- Drive excellence in quality across the organisation and in the solutioning of product problems
- Collaborate with the sales and design teams to create new products
- Manage engineers and take ownership of projects while ensuring product scalability
- Conduct regular meetings to plan and develop reports on the progress of projects

What you will need
- Bachelor's / Master's in computer science
- 8+ years of professional experience
- 4+ years of experience managing software development teams
- Experience building large-scale distributed systems
- Experience with scalable platforms
- Expertise in Java/Python/Go and multithreading
- Good understanding of Spark and its internals
- Deep understanding of transactional and NoSQL DBs
- Deep understanding of messaging systems – Kafka
- Good experience with cloud infrastructure, AWS preferably
- Ability to drive sprints and OKRs, with good stakeholder management experience
- Exceptional team management skills; experience managing a team of 4-5 junior engineers
- Good understanding of streaming and real-time pipelines
- Good understanding of data modelling concepts and data quality tools
- Good knowledge of business intelligence tools such as Metabase, Superset, and Tableau
- Good to have: knowledge of Trino, Flink, Presto, Druid, Pinot, etc.
- Good to have: knowledge of data pipeline building
Posted 1 week ago
10.0 - 20.0 years
10 - 20 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Job description
Should have about 10-12 years of development experience, with extensive experience in technical architecture, configuration options, and customization capabilities. Proven experience successfully implementing end-to-end IoT-based solutions in industries such as manufacturing, retail, and pharma. Experience implementing end-to-end IoT-based smart city solutions using the Garnet Framework. Proficiency in cloud platforms (AWS, Azure, Google Cloud). Sound knowledge of microservices architecture and containerization (Docker, Kubernetes). Strong development skills in languages (Python, Java, C#, C++, Node.js), data formats (JSON, XML, and binary formats), data processing (Kafka, Spark, and Flink), and network configuration. Good to have: proficiency in big data, with hands-on knowledge of Scala and R for data analysis and manipulation, and knowledge of NoSQL databases (e.g., MongoDB, Cassandra) and relational databases (e.g., MySQL, PostgreSQL) for data storage and retrieval.
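On the telemetry-ingestion side of IoT stacks like the one described, events typically flow into Kafka before Spark or Flink processes them. A hedged kafka-python sketch (broker address, topic, and payload fields are invented) of a producer publishing JSON sensor readings:

```python
# Illustrative IoT telemetry producer using kafka-python.
# Broker address, topic, and payload fields are assumptions.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

reading = {
    "device_id": "sensor-001",
    "temperature_c": 23.7,
    "ts": time.time(),
}
producer.send("iot-telemetry", reading)
producer.flush()  # block until the message is delivered
```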
Posted 1 week ago
5.0 - 10.0 years
0 - 4 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
HOW YOU WILL FULFILL YOUR POTENTIAL
As a member of our team, you will:
- Partner globally with sponsors, users, and engineering colleagues across multiple divisions to plan and execute engineering projects and drive our product roadmaps.
- Have responsibility for managing and leading a team of 8+ junior and senior software developers across 1-3 global locations.
- Be instrumental in implementing processes and procedures in order to maximize the quality and efficiency of the team.
- Manage significant projects and be involved in the full life cycle: scoping, designing, implementing, testing, deploying, and maintaining software systems across our products.
- Work closely with engineers to review the DB design, queries, and other ETL processes.
- Leverage various technologies including Java, Flink, JSON, Protobuf, Presto, Elastic Search, Kafka, and Kubernetes, with exposure to various SQL (preferably PostgreSQL) / NoSQL databases.
- Be able to innovate and incubate new ideas.

QUALIFICATIONS
A successful candidate will possess the following attributes:
- A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
- 9+ years of experience in software development, including management experience.
- Experience developing and designing end-to-end solutions to enterprise standards, including automated testing and the SDLC.
- Sound knowledge of DBMS concepts and database architecture; experience in ETL/data pipeline development.
- Experience in query tuning/optimization.
- The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
- Knowledge of the financial industry is desirable but not essential.

Experience in some of the following is desired and can set you apart from other candidates:
- UI/UX development
- API design, such as creating interconnected services, message buses, or real-time processing
- Relational databases
- Knowledge of the financial industry and compliance or risk functions
- Influencing stakeholders
Posted 1 week ago
4.0 - 8.0 years
5 - 9 Lacs
Hyderabad, Bengaluru
Work from Office
What's in it for you?
- Pay above market standards
- The role is contract-based, with project timelines from 2-12 months, or freelancing
- Be a part of an elite community of professionals who can solve complex AI challenges
- Work location could be: remote (highly likely); onsite at a client location; or Deccan AI's office in Hyderabad or Bangalore

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools
- Develop real-time and batch data pipelines to support analytics and machine learning (a short sketch follows this listing)
- Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices

Required Skills:
- Strong experience designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP)
- Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA)
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana)

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions
- Contributions to open-source data engineering communities

What are the next steps? Register on our Soul AI website.
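To ground the batch-pipeline requirement, here is a minimal, hypothetical Airflow DAG (Airflow 2.4+ is assumed for the `schedule` argument; the task names and function bodies are invented placeholders) wiring a daily extract-transform-load sequence:

```python
# Minimal illustrative Airflow DAG; the task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from a source system")


def transform():
    print("clean and reshape the data")


def load():
    print("write results to the warehouse")


with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract, then transform, then load
```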
Posted 2 weeks ago
4.0 - 8.0 years
13 - 17 Lacs
Hyderabad, Bengaluru
Work from Office
Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools
- Develop real-time and batch data pipelines to support analytics and machine learning
- Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices

Required Skills:
- Strong experience designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP)
- Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA)
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana)

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions
- Contributions to open-source data engineering communities
Posted 2 weeks ago
5.0 - 10.0 years
3 - 7 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Responsibilities:
- Designing, implementing, and optimizing CI/CD pipelines for cloud and hybrid environments
- Integrating AI-driven pipeline automation for self-healing deployments and predictive troubleshooting
- Leveraging GitOps (ArgoCD, Flux, Tekton) for declarative infrastructure management
- Implementing progressive delivery strategies (canary, blue-green, feature flags)
- Containerizing applications using Docker and Kubernetes (EKS, AKS, GKE, OpenShift, or on-prem clusters)
- Optimizing service orchestration and networking with service meshes (Istio, Linkerd, Consul)
- Implementing AI-enhanced observability for containerized services using AIOps-based monitoring
- Automating provisioning with Terraform, CloudFormation, Pulumi, or CDK
- Supporting and optimizing distributed computing workloads, including Apache Spark, Flink, or Ray (a short sketch follows this listing)
- Using GenAI-driven copilots for DevOps automation, including scripting, deployment verification, and infrastructure recommendations

The Impact You Will Have:
- Enhancing the efficiency and reliability of CI/CD pipelines and deployments
- Driving the adoption of AI-driven automation to reduce downtime and improve system resilience
- Enabling seamless application portability across on-prem and cloud environments
- Implementing advanced observability solutions to proactively detect and resolve issues
- Optimizing resource allocation and job scheduling for distributed processing workloads
- Contributing to the development of intelligent DevOps solutions that support both traditional and AI-driven workloads

What You'll Need:
- 5+ years of experience in DevOps, cloud engineering, or SRE
- Hands-on expertise with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Tekton, etc.)
- Strong experience with Kubernetes, container orchestration, and service meshes
- Proficiency in Terraform, CloudFormation, Pulumi, or other Infrastructure as Code (IaC) tools
- Experience working in hybrid cloud environments (AWS, Azure, GCP, on-prem)
- Strong scripting skills in Python, Bash, or Go
- Knowledge of distributed data processing frameworks (Spark, Flink, Ray, or similar)
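Of the distributed frameworks listed (Spark, Flink, Ray), Ray has the smallest illustrative footprint. A minimal, hedged sketch of fanning a workload out across local workers, where the task body is a stand-in for real processing work:

```python
# Minimal Ray sketch: run a CPU-bound task in parallel across workers.
# The task body is a placeholder for real processing work.
import ray

ray.init()  # starts a local cluster; in production you would connect to one


@ray.remote
def process_chunk(chunk_id: int) -> int:
    # Stand-in for real work (parsing, feature extraction, etc.).
    return chunk_id * chunk_id


futures = [process_chunk.remote(i) for i in range(8)]
print(ray.get(futures))  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```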
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
The Oracle Cloud Infrastructure (OCI) team can provide you the opportunity to build and operate a suite of massive-scale, integrated cloud services in a broadly distributed, multi-tenant cloud environment. OCI is committed to providing the best in cloud products to meet the needs of our customers, who are tackling some of the world's biggest challenges. We offer unique opportunities for smart, hands-on engineers with the expertise and passion to solve difficult problems in distributed, highly available services and virtualised infrastructure. At every level, our engineers have a significant technical and business impact, designing and building innovative new systems to power our customers' business-critical applications.

The Oracle Cloud Infrastructure (OCI) Security Platform & Compliance products team helps customers protect their business-critical cloud infrastructure and data. We build cloud-native security and compliance solutions that provide customers with visibility into the security posture of their cloud assets and help automate remediation where possible. This role provides a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal and external providers. We leverage Kafka, Spark, and machine learning technologies running on OCI. You'll work with product managers, designers, and engineers to build data-driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers.

Responsibilities:
- Develop a highly available and scalable platform that aggregates and analyzes streams of events with a small window of durability
- Design, deploy, and manage large-scale data systems and services built on OCI
- Develop, maintain, and tune threat detection algorithms (a short sketch follows this listing)
- Develop test beds and tools to help reduce noise and improve time to detect threats

Desired Skills and Experience:
- 4+ years of hands-on, large-scale cloud application software development
- 1+ years of experience in cloud infrastructure security and risk assessment
- 1+ years of hands-on experience with three of the following technologies: Kafka, Spark, AWS/OCI, Kubernetes, REST APIs, Linux
- 1+ years of experience using and building highly available streaming data solutions like Flink or Spark Streaming
- 1+ years of experience building applications on OCI, AWS, Azure, or GCP
- Critical thinking: the ability to track down complex data and engineering issues, and to analyze data to solve problems
- Experience with development methodologies with short release cycles
- Excellent problem-solving and communication skills with both technical and non-technical audiences

Optional Skills:
- Working knowledge of SSL, authentication, encryption, audit logging, and access policies

Oracle is an Affirmative Action-Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans status, age, or any other characteristic protected by law.

You will design, develop, troubleshoot, and debug software programs for databases, applications, tools, networks, etc. As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with developing, designing, and debugging software applications or operating systems. Work is non-routine and very complex, involving the application of advanced technical/business skills in the area of specialization. As an active team member, you will provide solutions and help others. BS or MS degree or equivalent experience relevant to the functional area; 4+ years of software engineering or related experience. Career Level - IC3
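The "develop, maintain and tune threat detection algorithms" duty often starts from simple statistical baselines. As a hedged toy example (window size, threshold, and data are invented, not tuned values), a rolling z-score detector flagging spikes in an event-rate series:

```python
# Toy anomaly detector: flag event-rate spikes via a rolling z-score.
# Window size and threshold are illustrative choices, not tuned values.
import numpy as np


def detect_spikes(rates, window=10, threshold=3.0):
    """Return indices where the rate deviates more than `threshold`
    standard deviations from the trailing-window mean."""
    anomalies = []
    for i in range(window, len(rates)):
        past = np.asarray(rates[i - window:i], dtype=float)
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(rates[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies


# Steady traffic with one injected spike at index 15.
series = [100, 102, 98, 101, 99, 103, 97, 100,
          102, 99, 101, 98, 100, 102, 99, 400]
print(detect_spikes(series))  # -> [15]
```

Production detectors are far more sophisticated, but the tuning trade-off is the same: a lower threshold catches more threats at the cost of more noise, which is exactly what the test-bed responsibility is meant to measure.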
Posted 2 weeks ago
10.0 - 12.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
Remote
About the Role
The Search platform currently powers Rider and Driver Maps, Uber Eats, Groceries, Fulfilment, Freight, Customer Obsession, and many other products and systems across Uber. We are building a unified platform for all of Uber's search use cases. The team is building the platform on OpenSearch; we already support in-house search infrastructure built on top of Apache Lucene. Our mission is to build a fully managed search platform while delivering a delightful user experience through low-code data and control APIs.

We are looking for an Engineering Manager with strong technical expertise to define a holistic vision and help build a highly scalable, reliable, and secure platform for Uber's core business use cases. Come join our team to build search functionality at Uber scale for some of the most exciting areas in the marketplace economy today. The ideal candidate will work closely with a highly cross-functional team, including product management, engineering, tech strategy, and leadership, to drive our vision and build a strong team. A successful candidate will need to demonstrate strong technical skills in system architecture and design. Experience with open-source systems and distributed systems is a big plus for this role. The EM2 role will require building a team of software engineers while directly contributing on the technical side too.

What the Candidate Will Do:
- Provide technical leadership; influence and partner with fellow engineers to architect, design, and build infrastructure that can stand the test of scale and availability while reducing operational overhead
- Lead, manage, and grow a team of software engineers; mentor and guide the professional and technical development of engineers on your team, and continuously improve software engineering practices
- Own the craftsmanship, reliability, and scalability of your solutions
- Encourage innovation, the implementation of groundbreaking technologies, outside-of-the-box thinking, teamwork, and self-organization
- Hire top-performing engineering talent while maintaining our dedication to diversity and inclusion
- Collaborate with platform, product, and security engineering teams to enable successful use of infrastructure and foundational services, and manage upstream and downstream dependencies

Basic Qualifications:
- Bachelor's degree (or higher) in Computer Science or a related field
- 10+ years of software engineering industry experience
- 8+ years of experience as an IC building large-scale distributed software systems
- Outstanding technical skills in backend development: Uber managers can lead from the front when the situation calls for it
- 1+ years of frontline management of a diverse set of engineers

Preferred Qualifications:
- Prior experience with search or big data systems: OpenSearch, Lucene, Pinot, Druid, Spark, Hive, HUDI, Iceberg, Presto, Flink, HDFS, YARN, etc.

We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together. Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .
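Since the platform is being built on OpenSearch, a minimal opensearch-py sketch shows the shape of the data and control APIs involved; the host, index name, and document fields are invented for illustration:

```python
# Illustrative opensearch-py usage: index one document, then search it.
# Host, index name, and fields are assumptions for the sketch.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

client.index(
    index="places",
    id="1",
    body={"name": "Dosa Corner", "city": "Bengaluru", "rating": 4.6},
    refresh=True,  # make the doc searchable immediately (demo only)
)

resp = client.search(
    index="places",
    body={"query": {"match": {"name": "dosa"}}},
)
print(resp["hits"]["total"]["value"])  # -> 1
```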
Posted 2 weeks ago
5.0 - 8.0 years
11 - 15 Lacs
Mumbai
Work from Office
Summary
Bitkraft Technologies LLP is looking for a Technical Architect to join our software engineering team. You will be working across the stack on cutting-edge web development projects for our custom services business. As a Senior Technical Architect, you play a pivotal role in designing, developing, and implementing cutting-edge data processing solutions. The ideal candidate will have a deep understanding of distributed systems, big data technologies, and real-time data processing frameworks. If you love solving problems, are a team player, and want to work in a fast-paced environment with core technical and business challenges, we would like to meet you.

Essential Skills
- Deep understanding of big data technologies, including Hadoop, Spark, Kafka, and Flink.
- Proven experience designing and implementing scalable, high-performance data processing solutions.
- Strong knowledge of real-time data processing concepts and frameworks.
- Familiarity with GraphQL and its use in API design.
- Familiarity with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes).

Other Essential Skills / Requirements
- Great attention to detail.
- Strong work ethic and commitment to meeting deadlines and supporting team members' goals.
- Flexibility to work across time zones with overseas customers if required.
- Ability to work independently and as part of a team.

Desirable Skills
- Experience with machine learning frameworks and tools.
- Knowledge of data governance and compliance standards.
- Familiarity with CI/CD practices and DevOps methodologies.

Key Responsibilities
- Provide technical leadership and guidance to development teams, ensuring adherence to best practices and architectural standards.
- Design and architect scalable, high-performance data processing solutions using technologies like Kafka, Spark, stream processing, batch processing, Apache Flink, Hadoop, and GraphQL.
- Develop and optimize data pipelines to efficiently extract, transform, and load (ETL) data from various sources into target systems.
- Implement real-time data processing solutions using technologies like Kafka and Flink to enable rapid insights and decision-making.
- Design and implement batch processing workflows using Hadoop or other big data frameworks to handle large datasets.
- Create GraphQL APIs to expose data services, ensuring efficient and flexible data access (a short sketch follows this listing).
- Research and evaluate new technologies and tools to stay abreast of industry trends and identify opportunities for improvement.
- Monitor and optimize the performance of data processing systems to ensure maximum efficiency and scalability.
- Collaborate with cross-functional teams (e.g., data scientists, data engineers, product managers) to deliver high-quality data solutions.
- Work closely with product management, data science, and operations teams to understand requirements and deliver data solutions that align with business goals.

Experience: 5 to 10 years

About Bitkraft Technologies LLP
Bitkraft Technologies LLP is an award-winning software engineering consultancy focused on enterprise software solutions, mobile app development, ML/AI solution engineering, extended reality, managed cloud services, and technology skill-sourcing, with an extraordinary track record. We are driven by technology and push the limits of what can be done to realise the business needs of our customers. Our team is committed to delivering products of the highest standards, and we take pride in creating robust user-driven solutions that meet business needs. Bitkraft boasts clients across 10+ countries, including the US, UK, UAE, Oman, Australia, and India. (ref:hirist.tech)
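One listed responsibility is exposing data services through GraphQL. A minimal, hypothetical sketch with the graphene library (the field, argument, and backing store are invented placeholders, not part of the posting) shows the shape of such an API:

```python
# Minimal GraphQL data-service sketch with the graphene library.
# The field, argument, and backing store are invented placeholders.
import graphene


class Query(graphene.ObjectType):
    metric = graphene.Float(key=graphene.String(required=True))

    def resolve_metric(root, info, key):
        # Stand-in for a lookup against a real data platform.
        fake_store = {"daily_orders": 12345.0}
        return fake_store.get(key)


schema = graphene.Schema(query=Query)
result = schema.execute('{ metric(key: "daily_orders") }')
print(result.data)  # -> {'metric': 12345.0}
```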
Posted 2 weeks ago
4.0 - 8.0 years
5 - 12 Lacs
Bengaluru
Work from Office
If interested apply here - https://forms.gle/sBcZaUXpkttdrTtH9

Key Responsibilities
- Work with Product Owners and various stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions, and design the scale-out architecture for the data platform to meet the requirements of the proposed solution.
- Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.
- Play an active role in leading team meetings and workshops with clients.
- Help the Data Engineering team produce high-quality code that allows us to put solutions into production.
- Create and own the technical product backlogs for data projects, and help the team close the backlogs on time.
- Help us shape the next generation of our products.
- Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
- Lead data mining and collection procedures.
- Ensure data quality and integrity.
- Interpret and analyze data problems.
- Develop custom data models and algorithms to apply to data sets.
- Coordinate with different functional teams to implement models and monitor outcomes.
- Develop processes and tools to monitor and analyze model performance and data accuracy.
- Understand client requirements and architect robust data platforms on multiple cloud technologies.
- Create reusable and scalable data pipelines.
- Work with DE/DA/ETL/QA/Application and various other teams to remove roadblocks.
- Align data projects with organizational goals.

Skills & Qualifications
- We're looking for someone with 4-7 years of experience who has worked through large data engineering projects.
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- Strong problem-solving skills with an emphasis on product development.
- Domain: big data, data platforms, distributed systems.
- Coding: any language (Java/Scala/Python), with strong knowledge of Spark (the most important requirement).
- Ingestion skills: one of Apache Storm, Flink, Spark.
- Streaming skills: one of Kafka, Kinesis, oplogs, binlogs, Debezium (a short sketch follows this listing).
- Database skills: HDFS, Delta Lake/Iceberg, Lakehouse.

If interested apply here - https://forms.gle/sBcZaUXpkttdrTtH9
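For the ingestion-plus-streaming combination this posting lists, a common pattern is landing a Kafka topic into lake storage with Spark. A hedged PySpark sketch (broker, topic, and paths are invented, and the spark-sql-kafka connector package is assumed to be available):

```python
# Hedged sketch: land a Kafka topic into lake storage as Parquet files.
# Broker, topic, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    raw.writeStream.format("parquet")
    .option("path", "s3a://lake/raw/orders/")            # assumed lake path
    .option("checkpointLocation", "s3a://lake/_chk/orders/")
    .trigger(processingTime="1 minute")                  # micro-batch cadence
    .start()
)
query.awaitTermination()
```

The checkpoint location is what makes the pipeline restartable without reprocessing or losing offsets.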
Posted 3 weeks ago
5.0 - 9.0 years
7 - 11 Lacs
Pune, Chennai, Bengaluru
Work from Office
We are seeking a skilled Java + Rust Developer to join our team. The ideal candidate will have expertise in backend development, containerization, microservices, and cloud platforms. This role requires strong problem-solving skills, the ability to work independently and in a team, and the flexibility to adapt to project requirements. The developer will be responsible for designing, developing, and optimizing scalable applications, integrating cloud-based solutions, and ensuring seamless system performance.
Location: Gurugram, Hyderabad, Mohali, Jaipur, Nagpur, Indore, Chandigarh, Mangalore, Trivandrum, Mysore
Posted 3 weeks ago
6.0 - 8.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
Remote
What the Candidate Will Do:
- Code: Produce top-tier code, ensuring reliability, readability, efficiency, and testability. Conduct thorough code reviews and develop comprehensive tests and quality documentation in adherence to software engineering principles. Demonstrate proficiency in data structures, algorithms, programming languages, frameworks, and key phases of the software development life cycle. Proactively identify, report, and resolve technical issues in accordance with industry standards and best practices.
- Design: Apply software design principles and leverage knowledge of existing Uber software solutions to create, extend, or develop effective architectures aligned with project requirements. Anticipate and adapt to evolving design needs, evaluating trade-offs to deliver systems capable of meeting current and future demands.
- Execute: Drive technical and business impact by executing tasks with diligence and urgency. Plan, organize, and manage resources efficiently to ensure timely delivery of work. Analyze problems, evaluate alternatives, and take responsibility for decisions while considering factors such as resources and costs.
- Collaborate: Foster trusting and collaborative relationships across diverse teams, valuing each individual's unique contributions. Resolve conflicts by understanding different perspectives, and align teams to achieve common goals. Provide constructive feedback in a respectful and impactful manner.

Basic Qualifications:
We are looking for experienced, smart engineers who are passionate about the domain and the technology, with a track record of ownership, execution quality, and customer obsession.
- Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience
- 6+ years of experience coding in a general-purpose programming language (e.g., C/C++, Java, Python, Go, C#, or JavaScript)
- Strong experience in architecture design and in high-availability, high-performance systems
- Deep understanding of distributed systems

Preferred Qualifications:
- Expertise in backend programming languages such as Python, Java, Go, or Node.js, with a deep understanding of building scalable, high-performance APIs and microservices architectures
- Demonstrated ability to solve complex backend engineering challenges and design robust, fault-tolerant systems with a focus on security, scalability, and maintainability
- Experience building and maintaining complex, large-scale, highly available distributed systems
- Experience building and maintaining complex data processing pipelines using Spark, Flink, Hadoop, Hive, Storm, etc.
- Demonstrated ability to collaborate with others and promote an inclusive work environment
- Familiarity with agile methodologies

We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together. Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .
Posted 3 weeks ago
7.0 - 9.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
Remote
What the Candidate Will Do:
- Partner with engineers, analysts, and product managers to define technical solutions that support business goals
- Contribute to the architecture and implementation of distributed data systems and platforms
- Identify inefficiencies in data processing and proactively drive improvements in performance, reliability, and cost
- Serve as a thought leader and mentor in data engineering best practices across the organization

Basic Qualifications:
- 7+ years of hands-on experience in software engineering with a focus on data engineering
- Proficiency in at least one programming language such as Python, Java, or Scala
- Strong SQL skills and experience with large-scale data processing frameworks (e.g., Apache Spark, Flink, MapReduce, Presto)
- Demonstrated experience designing, implementing, and operating scalable ETL pipelines and data platforms
- Proven ability to work collaboratively across teams and communicate technical concepts to diverse stakeholders

Preferred Qualifications:
- Deep understanding of data warehousing concepts and data modeling best practices
- Hands-on experience with Hadoop ecosystem tools (e.g., Hive, HDFS, Oozie, Airflow, Spark, Presto)
- Familiarity with streaming technologies such as Kafka or Samza
- Expertise in performance optimization, query tuning, and resource-efficient data processing (a short sketch follows this listing)
- Strong problem-solving skills and a track record of owning systems from design to production

We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together. Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .
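As a small illustration of the query-tuning theme (table names, paths, and relative sizes are invented for the example), a broadcast-join hint in Spark avoids shuffling a large fact table against a small dimension table:

```python
# Hedged sketch: broadcast the small dimension table so the join with a
# large fact table avoids a full shuffle. Names and paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").getOrCreate()

facts = spark.read.parquet("s3a://lake/facts/trips/")   # large table
dims = spark.read.parquet("s3a://lake/dims/cities/")    # small table

joined = facts.join(broadcast(dims), on="city_id", how="left")
joined.explain()  # the physical plan should show BroadcastHashJoin
```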
Posted 3 weeks ago
4 - 9 years
11 - 15 Lacs
Kochi
Work from Office
We are looking for a highly skilled and experienced Data Management Lead (Architect) with 4 to 9 years of experience to design, implement, and manage data lake environments. The ideal candidate will have a strong background in data management, architecture, and analytics. ### Roles and Responsibility Design and implement scalable, secure, and high-performing data lake architectures. Select appropriate technologies and platforms for data storage, processing, and analytics. Define and enforce data governance, metadata management, and data quality standards. Collaborate with IT security teams to establish robust security measures. Develop and maintain data ingestion and integration processes from various sources. Provide architectural guidance and support to data scientists and analysts. Monitor the performance of the data lake and recommend improvements. Stay updated on industry trends and advancements in data lake technologies. Liaise with business stakeholders to understand their data needs and translate requirements into technical specifications. Create documentation and architectural diagrams to provide a clear understanding of the data lake structure and processes. Lead the evaluation and selection of third-party tools and services to enhance the data lake's capabilities. Mentor and provide technical leadership to the data engineering team. Manage the full lifecycle of the data lake, including capacity planning, cost management, and decommissioning of legacy systems. ### Job Requirements At least 4 years of hands-on experience in designing, implementing, and managing data lakes or large-scale data warehousing solutions. Proficiency with data lake technologies such as Hadoop, Apache Spark, Apache Hive, or Azure Data Lake Storage. Experience with cloud services like AWS (Amazon Web Services), Microsoft Azure, or Google Cloud Platform, especially with their data storage and analytics offerings. Knowledge of SQL and NoSQL database systems, including relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra). Expertise in data modeling techniques and tools for both structured and unstructured data. Experience with ETL (Extract, Transform, Load) tools and processes, and understanding of data integration and transformation best practices. Proficiency in programming languages commonly used for data processing and analytics, such as Python, Scala, or Java. Familiarity with data governance frameworks and data quality management practices to ensure the integrity and security of data within the lake. Knowledge of data security principles, including encryption, access controls, and compliance with data protection regulations (e.g., GDPR, HIPAA). Experience with big data processing frameworks and systems, such as Apache Kafka for real-time data streaming and Apache Flink or Apache Storm for stream processing. Familiarity with data pipeline orchestration tools like Apache Airflow, Luigi, or AWS Data Pipeline. Understanding of DevOps practices, including continuous integration/continuous deployment (CI/CD) pipelines, and automation tools like Jenkins or GitLab CI. Skills in monitoring data lake performance, diagnosing issues, and optimizing storage and processing for efficiency and cost-effectiveness. Ability to manage projects, including planning, execution, monitoring, and closing, often using methodologies like Agile or Scrum. Self-starter, independent-thinker, curious and creative person with ambition and passion. 
### Education and Certifications
. Bachelor's degree: A bachelor's degree in Computer Science, Information Technology, Data Science, or a related field is typically required. This foundational education provides the theoretical knowledge necessary for understanding complex data systems.
. Master's degree (optional): A master's degree or higher in a relevant field such as Computer Science, Data Science, or Information Systems can be beneficial. It indicates advanced knowledge and may be preferred for more senior positions.
. Certifications (optional): Industry-recognized certifications can enhance a candidate's qualifications. Examples include AWS Certified Solutions Architect, Azure Data Engineer Associate, Google Professional Data Engineer, Cloudera Certified Professional (CCP), or certifications in specific technologies like Apache Hadoop or Spark.
. Power BI or other reporting platform experience is a must.
. Knowledge of Power Automate, QlikView, or any other reporting platform is an added advantage.
. ITIL Foundation certification is preferred.
Posted 1 month ago
3 - 5 years
25 - 35 Lacs
Bengaluru
Remote
Data Engineer
Experience: 3 - 5 years
Salary: Up to INR 35 Lacs per annum
Preferred Notice Period: Within 30 days
Shift: 10:30 AM to 7:30 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Apache Airflow, Spark, AWS, Kafka, SQL
Good-to-have skills: Apache Hudi, Flink, Iceberg, Azure, GCP

NomuPay (one of Uplers' clients) is looking for a Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview:
. Design, build, and optimize scalable ETL pipelines using Apache Airflow or similar frameworks to process and transform large datasets efficiently (a minimal DAG sketch follows the requirements below).
. Utilize Spark (PySpark), Kafka, Flink, or similar tools to enable distributed data processing and real-time streaming solutions.
. Deploy, manage, and optimize data infrastructure on cloud platforms such as AWS, GCP, or Azure, ensuring security, scalability, and cost-effectiveness.
. Design and implement robust data models, ensuring data consistency, integrity, and performance across warehouses and lakes.
. Enhance query performance through indexing, partitioning, and tuning techniques for large-scale datasets.
. Manage cloud-based storage solutions (Amazon S3, Google Cloud Storage, Azure Blob Storage) and ensure data governance, security, and compliance.
. Work closely with data scientists, analysts, and software engineers to support data-driven decision-making, while maintaining thorough documentation of data processes.

Requirements:
. Strong proficiency in Python and SQL, with additional experience in languages such as Java or Scala.
. Hands-on experience with frameworks like Spark (PySpark), Kafka, Apache Hudi, Iceberg, Apache Flink, or similar tools for distributed data processing and real-time streaming.
. Familiarity with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure for building and managing data infrastructure.
. Strong understanding of data warehousing concepts and data modeling principles.
. Experience with ETL tools such as Apache Airflow or comparable data transformation frameworks.
. Proficiency in working with data lakes and cloud-based storage solutions like Amazon S3, Google Cloud Storage, or Azure Blob Storage.
. Expertise in Git for version control and collaborative coding.
. Expertise in performance tuning for large-scale data processing, including partitioning, indexing, and query optimization.
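As a concrete illustration of the Airflow expectation above, here is a minimal DAG sketch of a daily extract-transform-load run using the TaskFlow API (Airflow 2.x assumed). The DAG id, task bodies, and the sample order data are hypothetical placeholders, not NomuPay's actual pipeline.

```python
# A minimal, hypothetical daily ETL DAG using the Airflow TaskFlow API.
# The DAG id, task logic, and sample data are illustrative placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_orders_etl():
    @task
    def extract() -> list:
        # A real task would pull from an API, a database, or object storage.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 80.0}]

    @task
    def transform(rows: list) -> list:
        # Example transformation: keep only orders above a threshold.
        return [r for r in rows if r["amount"] >= 100.0]

    @task
    def load(rows: list) -> None:
        # A real task would write to a warehouse table or a lake path.
        print(f"Loading {len(rows)} curated rows")

    load(transform(extract()))


daily_orders_etl()
```

Chaining the tasks through return values lets Airflow pass intermediate results via XCom and infer the dependency graph without explicit upstream/downstream declarations.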
About NomuPay: NomuPay is a newly established company that, through its subsidiaries, provides state-of-the-art unified payment solutions to help its clients accelerate growth in large, high-growth countries in Asia, Turkey, and the Middle East region. NomuPay is funded by Finch Capital, a leading European and Southeast Asian financial technology investor. NomuPay acquired Wirecard Turkey on April 21, 2021 for an undisclosed amount. At NomuPay, we're all about making global payments simple. Since 2021, we've been on a mission to remove complexity and help businesses expand without limits.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal besides this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 month ago
10 - 12 years
30 - 35 Lacs
Hyderabad
Work from Office
Grade Level (for internal use): 11

The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.

The Impact: The work you do will be used every single day; it's the essential code you'll write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets.

What's in it for you:
. Build a career with a global company.
. Work on code that fuels the global financial markets.
. Grow and improve your skills by working on enterprise-level products and new technologies.

Responsibilities:
. Solve problems, analyze and isolate issues.
. Provide technical guidance and mentoring to the team and help them adopt change as new processes are introduced.
. Champion best practices and serve as a subject matter authority.
. Develop solutions to support key business needs.
. Engineer components and common services based on standard development models, languages, and tools.
. Produce system design documents and lead technical walkthroughs.
. Produce high-quality code.
. Collaborate effectively with technical and non-technical partners.
. As a team member, continuously improve the architecture.

Basic Qualifications:
. 10-12 years of experience designing/building data-intensive solutions using distributed computing.
. Proven experience in implementing and maintaining enterprise search solutions in large-scale environments.
. Experience working with business stakeholders and users, providing research direction and solution design, and writing robust, maintainable architectures and APIs.
. Experience developing and deploying search solutions in a public cloud such as AWS.
. Proficient programming skills in high-level languages: Java, Scala, Python.
. Solid knowledge of at least one machine learning research framework.
. Familiarity with containerization, scripting, cloud platforms, and CI/CD.
. 5+ years of experience with Python, Java, Kubernetes, and data and workflow orchestration tools.
. 4+ years of experience with Elasticsearch, SQL, NoSQL, Apache Spark, Flink, Databricks, and MLflow.
. Prior experience operationalizing data-driven pipelines for large-scale batch and stream processing analytics solutions.
. Good to have: experience contributing to GitHub and open-source initiatives or research projects, and/or participation in Kaggle competitions.
. Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines.
. Strong communication and documentation skills for both technical and non-technical audiences.

Preferred Qualifications:
. Search technologies: querying and indexing content for Apache Solr, Elasticsearch, etc. (a hedged query sketch follows this list).
. Proficiency in search query languages (e.g., Lucene query syntax) and experience with data indexing and retrieval.
. Experience with machine learning models and NLP techniques for search relevance and ranking.
. Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec).
. Experience with relevance tuning using A/B testing frameworks.
. Big data technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow.
. Data science search technologies: personalization and recommendation models, Learning to Rank (LTR).
. Preferred languages: Python, Java.
. Database technologies: MS SQL Server platform, with stored procedure programming experience using Transact-SQL.
. Ability to lead, train, and mentor.
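For illustration, here is a minimal sketch of the kind of search-relevance query named in the preferred qualifications: a keyword query combined with a kNN clause over an embedding field, using the official Elasticsearch Python client (Elasticsearch 8.x assumed). The index name, field names, and the query vector are hypothetical placeholders; in practice the vector would come from an embedding model such as BERT.

```python
# A minimal, hypothetical hybrid search: lexical multi_match plus kNN over
# a dense_vector field. Index and field names are illustrative placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

query_text = "commodity price outlook"
# Truncated for brevity; a real embedding has hundreds of dimensions and
# must match the dims of the indexed dense_vector field.
query_vector = [0.12, -0.03, 0.88]

response = es.search(
    index="research-articles",
    query={
        "multi_match": {
            "query": query_text,
            "fields": ["title^2", "body"],  # boost title matches over body
        }
    },
    knn={
        "field": "body_embedding",      # dense_vector field indexed for kNN
        "query_vector": query_vector,
        "k": 10,
        "num_candidates": 100,
    },
    size=10,
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```

When both clauses are present, Elasticsearch blends the lexical and vector scores, which is a common starting point before relevance tuning with A/B tests.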
Posted 1 month ago