3.0 years
35 Lacs
Surat, Gujarat, India
Remote
Experience: 3.00+ years
Salary: INR 3,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by: NA)
(Note: This is a requirement for one of Uplers' clients, Nomupay.)

What do you need for this opportunity?
Must-have skills: Apache Hudi, Flink, Iceberg, Apache Airflow, Spark, AWS, Azure, GCP, Kafka, SQL

Nomupay offers:
📈 A company with a solid track record of performance
🤝 The opportunity to work with diverse, global teams
🚀 Rapid career advancement with opportunities to learn
💰 Competitive salary and performance bonus

Responsibilities:
Design, build, and optimize scalable ETL pipelines using Apache Airflow or similar frameworks to process and transform large datasets efficiently (an illustrative sketch follows this listing).
Use Spark (PySpark), Kafka, Flink, or similar tools for distributed data processing and real-time streaming solutions.
Deploy, manage, and optimize data infrastructure on cloud platforms such as AWS, GCP, or Azure, ensuring security, scalability, and cost-effectiveness.
Design and implement robust data models, ensuring data consistency, integrity, and performance across warehouses and lakes.
Improve query performance through indexing, partitioning, and tuning techniques for large-scale datasets.
Manage cloud-based storage (Amazon S3, Google Cloud Storage, Azure Blob Storage) and ensure data governance, security, and compliance.
Work closely with data scientists, analysts, and software engineers to support data-driven decision-making, while maintaining thorough documentation of data processes.

Requirements:
Strong proficiency in Python and SQL, with additional experience in languages such as Java or Scala.
Hands-on experience with frameworks such as Spark (PySpark), Kafka, Apache Hudi, Iceberg, or Apache Flink for distributed data processing and real-time streaming.
Familiarity with cloud platforms such as AWS, Google Cloud Platform (GCP), or Microsoft Azure for building and managing data infrastructure.
Strong understanding of data warehousing concepts and data modeling principles.
Experience with ETL tools such as Apache Airflow or comparable data transformation frameworks.
Proficiency in working with data lakes and cloud-based storage such as Amazon S3, Google Cloud Storage, or Azure Blob Storage.
Expertise in Git for version control and collaborative coding.
Expertise in performance tuning for large-scale data processing, including partitioning, indexing, and query optimization.

About NomuPay: NomuPay is a newly established company that, through its subsidiaries, provides state-of-the-art unified payment solutions to help its clients accelerate growth in large, high-growth markets in Asia, Turkey, and the Middle East. NomuPay is funded by Finch Capital, a leading European and Southeast Asian financial technology investor. NomuPay acquired Wirecard Turkey on April 21, 2021 for an undisclosed amount.

Founders: Peter Burridge, CEO. An investor, board member, and strategic executive, Peter has more than 30 years of management and leadership experience at rapid-growth technology companies. His hands-on approach to business development and corporate governance has made him a trusted advisor and authority in the enterprise software industry and the financial technology sector. As President of Hyperwallet, Peter guided the organization through a successful recapitalization, followed by global expansion and the ultimate sale of the business to PayPal. He is a recognizable figure in the San Francisco fintech community and the global payments industry, having previously served in leadership roles at Oracle, Siebel, and Travelex Global Business Payments, and as an investor and advisor in the technology sector. Outside the office, his passions include racing cars, golf, and rugby union.

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for an interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers, and we will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
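Purely illustrative: a minimal sketch of the kind of Airflow-orchestrated PySpark batch job this listing describes. The DAG id, S3 paths, and connection id are hypothetical, the Spark provider package is assumed to be installed, and the client's actual pipelines may look quite different.

```python
# Minimal Airflow DAG sketch: a daily batch ETL task submitted to Spark.
# All names (dag_id, S3 paths, conn_id) are hypothetical examples, and
# apache-airflow-providers-apache-spark is assumed to be installed.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_events_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    transform_events = SparkSubmitOperator(
        task_id="transform_events",
        application="s3://example-bucket/jobs/transform_events.py",  # hypothetical
        conn_id="spark_default",
        application_args=["--run-date", "{{ ds }}"],  # templated execution date
    )
```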
Posted 1 week ago
4.0 - 7.0 years
6 - 9 Lacs
Bengaluru
Work from Office
About the Role: Our Dashboard teams build and maintain our Web applications, which manage millions of network devices from our cloud. Our customers use the Meraki Dashboard to monitor and configure critical IT infrastructure that serves tens of millions of people every day. As a Software Engineer on the MX Dashboard team, you will collaborate with firmware and other Backend/SRE/Dashboard engineers to architect, design, and build a large-scale system running MX SD-WAN and Security features. You will enable connections between over a million network nodes and the SD-WAN and Security customers who rely on our products to serve tens of millions of people. With the large footprint that we have, quality is our highest priority. The MX Dashboard team is responsible for delivering a simple-to-use but very powerful, scalable, and groundbreaking cloud-managed service to customers. With help from product managers and firmware engineers, you will construct intuitive but powerful systems that will be used by customers via the Meraki Dashboard.

What you will work on:
Solve challenging architecture problems to build scalable and extendable systems.
Work with firmware engineers and PMs to build intuitive and powerful workflows to handle containers.
Coordinate and align knowledge and opinions between firmware, SRE, and Dashboard developers.
With the help of other engineers, implement sophisticated Backend and Dashboard systems to handle MX SD-WAN and Security solutions.
Identify and solve performance bottlenecks in our Backend architecture.
Take complete ownership from conception to production release by leveraging your ability to influence, facilitate, and work collaboratively across teams.
Lead, mentor, and spread best practices to other specialists on the team.

You are an ideal fit if you have:
4+ years of experience writing professional production code and tests for large-scale systems
3+ years of experience in Backend and Full Stack technologies: Ruby on Rails, Python, Scala, Java, NodeJS, JavaScript
The ability to implement efficient database design and ensure query performance in a relational database (Postgres, SQL); a small example follows this listing
Experience with container solutions (Kubernetes)
A strategic, product-oriented approach and a desire to understand users
Outstanding communication skills

Bonus points for any of the following:
Experience or interest in Security or Networking
Experience building rich web UIs with React (and Redux)
Familiarity with observability tools such as ELK and Grafana
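The posting asks for efficient relational database design and query performance in Postgres. As a hedged illustration only (the DSN, table, and column names are hypothetical, not Meraki's schema), this sketch shows the basic index-then-verify loop with psycopg2:

```python
# Sketch: verifying that an index actually serves a hot query, via psycopg2.
# The DSN, table, and column names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=dashboard_example user=dev")  # hypothetical DSN
cur = conn.cursor()

# An index on the lookup column lets the planner avoid a sequential scan.
cur.execute("CREATE INDEX IF NOT EXISTS idx_devices_network ON devices (network_id)")
conn.commit()

# EXPLAIN ANALYZE reports the chosen plan and the real execution time.
cur.execute("EXPLAIN ANALYZE SELECT * FROM devices WHERE network_id = %s", (42,))
for (line,) in cur.fetchall():
    print(line)

cur.close()
conn.close()
```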
Posted 1 week ago
10.0 - 15.0 years
25 - 40 Lacs
Mumbai
Work from Office
Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it is the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom: it is a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We are a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team and be part of shaping the future of data for India's biggest digital revolution!

About the Role:
Title: Lead Data Engineer
Location: Mumbai

Responsibilities:
End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the evolution of the team's data pipeline framework.
Data Architecture & Solutions: Contribute to data architecture design, applying expertise in data modelling, storage, and retrieval.
Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details:
Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.); see the streaming sketch after this listing.
Database Expertise: Excellent SQL querying skills and a strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
End-to-End Pipelines: Demonstrated experience implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including real-time streaming data.
Cloud Expertise: Knowledge of cloud technologies such as Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes:
Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
Communication & Collaboration: Excellent written and verbal communication skills, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
Continuous Learning & Adaptability: A demonstrated passion for staying up to date with emerging data technologies and a willingness to adapt to new tools.
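The qualifications above name Kafka and Spark Streaming. As a hedged illustration (the broker, topic, and checkpoint path are hypothetical, and the Spark-Kafka connector is assumed to be on the classpath), a minimal Structured Streaming job of the kind described might look like this:

```python
# Sketch: Spark Structured Streaming consuming a Kafka topic and counting
# events per 1-minute window. Broker, topic, and paths are hypothetical;
# the spark-sql-kafka connector package is assumed to be available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream-example").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "user-events")                 # hypothetical topic
    .load()
)

# Kafka values arrive as bytes; cast to string and aggregate by event time.
counts = (
    events.select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # hypothetical
    .start()
)
query.awaitTermination()
```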
Posted 1 week ago
3.0 years
35 Lacs
Jaipur, Rajasthan, India
Remote
This is the same Nomupay remote data engineering opening as the first listing above; the full description, requirements, and application steps are identical, and only the candidate location differs.
Posted 1 week ago
3.0 years
35 Lacs
Greater Lucknow Area
Remote
This is the same Nomupay remote data engineering opening as the first listing above; the full description, requirements, and application steps are identical, and only the candidate location differs.
Posted 1 week ago
3.0 years
35 Lacs
Nashik, Maharashtra, India
Remote
This is the same Nomupay remote data engineering opening as the first listing above; the full description, requirements, and application steps are identical, and only the candidate location differs.
Posted 1 week ago
3.0 years
35 Lacs
Kanpur, Uttar Pradesh, India
Remote
This is the same Nomupay remote data engineering opening as the first listing above; the full description, requirements, and application steps are identical, and only the candidate location differs.
Posted 1 week ago
3.0 years
35 Lacs
Nagpur, Maharashtra, India
Remote
This is the same Nomupay remote data engineering opening as the first listing above; the full description, requirements, and application steps are identical, and only the candidate location differs.
Posted 1 week ago
0 years
0 Lacs
India
Remote
Description
Title: ML Engineer
Location: India, Remote

EGNYTE YOUR CAREER. SPARK YOUR PASSION.

Role: Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning and every Egnyter should be respected. With 17,000 customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you're not just landing a new career; you become part of a team of Egnyters: doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations

About Egnyte: Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.

The Opportunity: We are looking for an experienced engineer to help us design, develop, and deploy machine learning and deep learning models in production, with a strong focus on NLP solutions. The core of the work is providing technical leadership for the development of NLP projects. Beyond taking models to production, an important part of the work concerns developing appropriate approaches and tools to ensure the professional management of our models in production. Finally, transferring knowledge, providing technical expertise to team members, and helping shape the team are integral parts of the job.

Your Day-to-Day at Egnyte:
Supervising the full development of machine learning and deep learning projects, from design to deployment and maintenance
Providing technical leadership for the development of NLP projects
Reviewing state-of-the-art machine learning and deep learning technologies/models, with a strong focus on NLP
Evaluating potential ML solutions and choosing the most appropriate ones for technical and business needs, in close collaboration with our Product team
Defining the architecture of machine learning-based projects, including integrations with other Egnyte products
Supporting the whole lifecycle of our machine learning models, including gathering data for (re)training, A/B testing, deployment, monitoring, retraining, and redeployment
Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
Communicating your approach and results to a wider audience through articles and presentations

About You:
Documented technical excellence in NLP
Demonstrated success with machine learning and deep learning in a SaaS or cloud environment, with hands-on knowledge of model creation and deployment in production at scale
Advanced communication skills, especially with regard to knowledge transfer
Ability to provide mentorship and team support
Fluency in at least one deep learning framework: PyTorch or TensorFlow/Keras
Advanced knowledge of the Hugging Face libraries (transformers and tokenizers) or the Fairseq library; a minimal usage sketch follows this listing
Fluency in Python, Docker, Kubernetes, and Helm
Solid English skills to communicate effectively with other team members

Bonus Skills:
Experience with large datasets and distributed computing, especially on Google Cloud Platform
Good understanding of advanced analytical modeling and statistical forecasting techniques
Knowledge of Java, Scala, or Go
Familiarity with Kubeflow
Experience with OpenCV
Names containing "BERT" are very welcome ;-)

Benefits:
Competitive salaries
Medical insurance and healthcare benefits for you and your family
Fully paid premiums for life insurance
Flexible hours and PTO
Mental wellness platform subscription
Gym reimbursement
Childcare reimbursement
Group term life insurance

Commitment to Diversity, Equity, and Inclusion: At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.
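Since the posting calls out the Hugging Face transformers and tokenizers libraries, here is a minimal, hedged usage sketch; the model checkpoint and input text are illustrative assumptions, not Egnyte's actual stack:

```python
# Sketch: loading a pretrained transformer for sequence classification with
# the Hugging Face libraries the posting names. The checkpoint is a public
# example model, chosen purely for illustration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("This document is well organized.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to the model's label names.
predicted = model.config.id2label[int(logits.argmax(dim=-1))]
print(predicted)
```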
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
Bengaluru
Work from Office
About the Role: Grade Level (for internal use): 11
S&P Global Mobility
The Role: Senior Data Engineer (AWS Cloud, Python)

We are seeking a Senior Data Engineer with deep expertise in AWS cloud development to join our fast-paced data engineering organization. This role is critical to both the development of new data products and the modernization of existing platforms. The ideal candidate is a seasoned data engineer with hands-on experience designing, building, and optimizing large-scale data pipelines and architectures in both on-premises (e.g., Oracle) and cloud environments (especially AWS). This individual will also serve as a cloud development expert, mentoring and guiding other data engineers as they enhance their cloud skill sets.

Responsibilities:
Data Engineering & Architecture
Design, build, and maintain scalable data pipelines and data products.
Develop and optimize ELT/ETL processes using a variety of data tools and technologies.
Support and evolve data models that drive operational and analytical workloads.
Modernize legacy Oracle-based systems and migrate workloads to cloud-native platforms.
Cloud Development & DevOps (AWS-Focused)
Build, deploy, and manage cloud-native data solutions using AWS services (e.g., S3, Lambda, Glue, EMR, Redshift, Athena, Step Functions); a small Athena sketch follows this listing.
Implement CI/CD pipelines and infrastructure as code (e.g., Terraform or CloudFormation), and monitor cloud infrastructure for performance and cost optimization.
Ensure data platform security, scalability, and resilience in the AWS cloud.
Technical Leadership & Mentoring
Act as a subject matter expert on cloud-based data development and DevOps best practices.
Mentor data engineers on AWS architecture, infrastructure as code, and cloud-first design patterns.
Participate in code and architecture reviews, enforcing best practices and high quality standards.
Cross-functional Collaboration
Work closely with product managers, data analysts, software engineers, and other stakeholders to understand business needs and deliver end-to-end solutions.
Support and evolve the roadmap for data platform modernization and new product delivery.

What We're Looking For:
Required Qualifications
7+ years of experience in data engineering or an equivalent technical role.
5+ years of hands-on experience with AWS cloud development and DevOps.
Strong expertise in SQL, data modeling, and ETL/ELT pipelines.
Deep experience with Oracle (PL/SQL, performance tuning, data extraction).
Proficiency in Python and/or Scala for data processing tasks.
Strong knowledge of cloud infrastructure (networking, security, cost optimization).
Experience with infrastructure as code (Terraform).
Familiarity with CI/CD pipelines and DevOps tooling (e.g., Jenkins, GitHub Actions).
Preferred (Nice to Have)
Experience with Google Cloud Platform (GCP) and Snowflake.
Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
Experience with modern orchestration tools (e.g., Airflow, dbt).
Exposure to data cataloging, governance, and quality tools.
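As a small, hedged illustration of the AWS-native tooling this role centers on (the database, table, and result bucket are hypothetical examples, not S&P's environment), a boto3 sketch that runs an Athena query over data in S3:

```python
# Sketch: running an Athena query over S3-backed data with boto3.
# Database, table, and output bucket are hypothetical examples.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT vehicle_id, COUNT(*) FROM telemetry GROUP BY vehicle_id",
    QueryExecutionContext={"Database": "mobility_example"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query finishes (simplistic; real code would back off and time out).
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows[:3])  # header row plus the first results
```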
Posted 1 week ago
5.0 - 9.0 years
7 - 11 Lacs
Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 11
The Role: Lead Software Engineering

The Team: Our team is responsible for the architecture, design, development, and maintenance of technology solutions to support the Sustainability business unit within Market Intelligence and other divisions. Our program is built on a foundation of inclusivity, enablement, adaptability, and respect, which fosters an environment of open communication and trust. We take pride in each team member's accountability and responsibility to move us forward in our strategic initiatives. Our work is collaborative; we work transparently with others within our business unit and across the entire organization.

The Impact: As a Lead, Cloud Engineering at S&P Global, you will be instrumental in streamlining the software development and deployment of our applications to meet the needs of our business. Your work ensures seamless integration and continuous delivery, enhancing the platform's operational capabilities to support our business units. You will collaborate with software engineers and data architects to automate processes, improve system reliability, and implement monitoring solutions. Your contributions will be vital in maintaining high availability, security, and performance standards, ultimately leading to the delivery of impactful, data-driven solutions.

What's in it for you:
Career Development: Build a meaningful career with a leading global company at the forefront of technology.
Dynamic Work Environment: Work in an environment that is dynamic and forward-thinking, directly contributing to innovative solutions.
Skill Enhancement: Enhance your software development skills on an enterprise-level platform.
Versatile Experience: Gain full-stack experience and exposure to cloud technologies.
Leadership Opportunities: Mentor peers and influence the product's future as part of a skilled team.

Key Responsibilities:
Design and develop scalable cloud applications using various cloud services.
Collaborate with cross-functional teams to define, design, and deliver new features.
Implement cloud security best practices and ensure compliance with industry standards.
Monitor and optimize application performance and reliability in the cloud environment (a small monitoring sketch follows this listing).
Troubleshoot and resolve issues related to our applications and services.
Stay updated with the latest cloud technologies and trends.
Manage our cloud instances and their lifecycle to guarantee a high degree of reliability, security, scalability, and confidence at any given time.
Design and implement CI/CD pipelines to automate software delivery and infrastructure changes.
Collaborate with development and operations teams to improve collaboration and productivity.
Manage and optimize cloud infrastructure and services.
Implement configuration management tools and practices.
Ensure security best practices are followed in the deployment process.

What We're Looking For:
Bachelor's degree in Computer Science or a related field.
Minimum of 10+ years of experience in a cloud engineering or related role.
Proven experience in cloud development and deployment.
Proven experience in agile and project management.
Expertise with cloud services (AWS, Azure, Google Cloud).
Experience with EMR, EKS, Glue, Terraform, and cloud security.
Proficiency in programming languages such as Python, Java, Scala, and Spark.
Strong implementation experience with AWS services (e.g., EC2, ECS, ELB, RDS, EFS, EBS, VPC, IAM, CloudFront, CloudWatch, Lambda, S3).
Proficiency in scripting languages such as Bash, Python, or PowerShell.
Experience with CI/CD tools such as Azure CI/CD.
Experience with SQL and MS SQL Server.
Knowledge of containerization technologies like Docker and Kubernetes.
Nice to have: knowledge of GitHub Actions, Redshift, and machine learning frameworks.
Excellent problem-solving and communication skills.
Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines.
Strong communication and documentation skills for both technical and non-technical audiences.
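One concrete flavor of the monitoring work described above, offered as an assumption-laden sketch rather than S&P's actual setup: creating a CloudWatch alarm from Python with boto3. The alarm name, instance id, and threshold are all hypothetical.

```python
# Sketch: defining a CloudWatch alarm in code, the kind of monitoring
# automation the role describes. Names and thresholds are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="example-api-high-cpu",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmDescription="Alert when average CPU stays above 80% for 15 minutes.",
)
```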
Posted 1 week ago
7.0 - 12.0 years
25 - 37 Lacs
Pune
Hybrid
Come work at a place where innovation and teamwork come together to support the most exciting missions in the world!

Description: We are seeking a talented Lead Big Data Engineer to deliver roadmap features of the Enterprise TruRisk Platform, which helps customers measure, communicate, and eliminate cyber risk. Working with a team of engineers and architects, you will be responsible for prototyping, designing, developing, and supporting a highly scalable, distributed, SaaS-based security risk prioritization product. This is a fantastic opportunity to be an integral part of a team building Qualys' next-generation platform using Big Data and microservices-based technology to process billions of transactions per day, leverage open-source technologies, and work on challenging and business-impacting initiatives.

Responsibilities:
Be the thought leader in the data platform and pipeline, along with risk evaluation.
Provide technical leadership to the engineering organization on data platform design, rollout, and evolution.
Act as a liaison to product teams, professional services, and sales engineers on solution and trade-off reviews, and represent engineering in such conversations.
Drive technology explorations and roadmaps.
Serve as a technical lead on our most demanding cross-functional initiatives.
Ensure the quality of architecture and design of systems.
Functionally decompose complex problems into simple, straightforward solutions.
Fully and completely understand system interdependencies and limitations.
Possess expert knowledge in performance, scalability, enterprise system architecture, and engineering best practices.
Leverage knowledge of internal and industry prior art in design decisions.
Effectively research and benchmark cloud technology against competing systems in the industry.
Document designs in enough detail that developers can easily understand the requirements.
Assist developers with proper requirements and direction.
Assist in the career development of others, actively mentoring individuals and the community on advanced technical issues and helping managers guide the career growth of their team members.
Exert technical influence over multiple teams, increasing their productivity and effectiveness by sharing your deep knowledge and experience.
Share knowledge and train others.

Qualifications:
Bachelor's degree in computer science or equivalent.
8+ years of total experience.
4+ years of relevant experience designing and architecting Big Data solutions using Spark.
3+ years of experience working with engineering resources on innovation.
4+ years of experience with Big Data event-flow pipelines (a minimal Kafka consumer sketch follows this listing).
3+ years of experience in performance testing for large infrastructure.
3+ years of in-depth experience with search solutions (Solr/Elasticsearch).
3+ years of experience with Kafka.
In-depth experience with data lakes and related ecosystems.
In-depth experience with messaging queues.
In-depth experience specifying requirements for scalable architectures in Big Data and microservices environments.
In-depth experience with caching components and services.
Knowledge of Presto technology.
Knowledge of Airflow.
Hands-on experience in scripting and automation.
In-depth understanding of RDBMS/NoSQL, Oracle, Cassandra, Kafka, Redis, and Hadoop, plus Lambda, Kappa, and Kappa++ architectures with Flink data streaming and rule engines.
Experience with ML model engineering and related deployment.
Design and implement secure Big Data clusters to meet compliance and regulatory requirements.
Experience leading the delivery of large-scale systems focused on managing the infrastructure layer of the technology stack.
Strong experience in performance benchmarking and testing for Big Data technologies.
Strong troubleshooting skills.
Experience leading the development lifecycle process and best practices.
Experience in Big Data services administration is an added value.
Experience with Agile management (Scrum, RUP, XP), OO modeling, and internet, UNIX, middleware, and database-related projects.
Experience mentoring and training the engineering community on complex technical issues.
Project management experience.
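As a hedged sketch of the event-flow pipelines the qualifications describe, here is a minimal consumer using the kafka-python client; the topic, broker, and group id are hypothetical, and the actual platform may use a different client or framework entirely:

```python
# Sketch: a minimal Kafka consumer for an event-flow pipeline.
# Topic, brokers, and group id are hypothetical examples.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "risk-events",                       # hypothetical topic
    bootstrap_servers=["broker1:9092"],  # hypothetical broker
    group_id="risk-scorer",              # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Placeholder for the real scoring/enrichment step.
    print(f"partition={message.partition} offset={message.offset} "
          f"asset={event.get('asset_id')}")
```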
Posted 1 week ago
7.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Senior Manager, Software Development Engineering

Expedia revolutionizes the way people search and book travel. We make travel smooth and memorable for everyone, and we create success for our travel partners. We are the Distribution and Supply team at Expedia. We own Pricing, Inventory, Reservation, and Offers. We delight our travellers with great prices for any travel (hotel, vacation rental, air, cars, packages, and cruises), and support them in reserving and managing their experience. We own the unified technical systems that perform these functions and handle the trillions of events that deliver this value at Expedia's global scale.

What You Will Do:
Be responsible for building, growing, and shaping adaptive, hardworking, motivated teams and individuals around their goals, ownership, and careers.
Lead, coordinate, and collaborate on multiple concurrent and complex cross-organizational initiatives, understanding goals, constraints, and perspectives, and making resource, delivery, and architectural trade-offs to maximize strategic value.
Lead and actively contribute to all phases of the software development lifecycle, including the design, analysis, development, and deployment efforts for multiple enterprise application projects that tackle sophisticated business challenges.
Collaborate with visionary EG leaders to architect and build robust applications and thoughtfully choose relevant technologies to evolve the EG travel platform.
Support technical leads and individual contributors, including coaching, ongoing training and development, performance evaluations, goal setting, disciplinary actions, recruiting, and hiring.
Create a positive work environment based on accountability and inclusiveness, in partnership with peers on the leadership team.
Lead by example, mentor the team, and establish credibility through quality technical execution.
Demonstrate knowledge of the product development lifecycle, from idea generation to bringing a product to market, by supporting the different phases and improving product performance.
Engage with peers across the organization to build an understanding of cross-dependencies, priorities, and opportunities to simplify.
Advocate for operational excellence (such as unit testing, establishing SLAs, and programming for resiliency and scalability).
Ensure that operational teams and subcontractors have a clear understanding of customer requirements; identify technical issues and provide data to support solutions.
Remain informed on industry trends. Examine inefficiencies in the existing stack's operation and encourage engineers to improve them.
Bridge the gap in discussions between technology and non-technology personnel. Report on team status faithfully and listen for suggestions to improve lagging project work.
Technologies include Java, Kotlin, Scala, Spring, Docker, Redis, DataDog, Splunk, and the AWS cloud.

Who You Are:
Bachelor's or Master's degree in computer science or a related technical field, or equivalent related professional experience
7+ years of professional, post-college software development in an object-oriented language
3+ years of people management experience with a passion for growing individual careers and enabling high-performing teams
A hands-on technologist and leader well versed in running sophisticated, multi-quarter initiatives and a broad portfolio of applications and services
Strong technical acumen and commitment to quality of engineering work and continuous improvement
Excellent at switching contexts, from strategic to detailed, technical to business, inter-team to cross-organization, and everything in between
Strong communication skills and a highly effective collaborator: you articulate your ideas to teammates, peers, and leaders, providing details and supporting your ideas with data where applicable
You incorporate others' input and feedback and strive to find common ground
You enjoy and take pride in the work of your people, focusing on their success and willing to go above and beyond to help them win
You take ownership of outcomes, holding yourself and your team accountable for delivering impactful results while continuously learning and improving

Accommodation requests: If you need assistance with any part of the application or recruiting process due to a disability or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.

We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs.

Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, or age.
Posted 1 week ago
30.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The purpose of this role is to build and maintain data in the business's operational and analytics databases. The Data Engineer works with the business's software engineers, data analytics teams, data scientists, data warehouse engineers, business intelligence experts and data visualization specialists to understand and aid in the implementation of database requirements, enhance performance through data-driven analysis, build reporting and BI dashboards, and troubleshoot data issues.

Job Description: The purpose of this role is to maintain, improve, clean and manipulate data in the business's operational and analytics databases. The Data Engineer works with the business's software engineers, data analytics teams, data scientists, data warehouse engineers, business intelligence experts and data visualization specialists to understand and aid in the implementation of database requirements, enhance performance through data-driven analysis, build reporting and BI dashboards, and troubleshoot data issues.

Job Title: Data Engineer (Senior Analyst)

About Dentsu: Led by Dentsu Group Inc. (Tokyo: 4324; ISIN: JP3551520004), a pure holding company established on January 1, 2020, the Dentsu Group encompasses two operational networks: the Dentsu Japan Network, which oversees Dentsu's agency operations in Japan, and Dentsu International, its international business headquarters in London, which oversees Dentsu's agency operations outside of Japan. With a strong presence in approximately 145 countries and regions across five continents and with 65,000 dedicated professionals, the Dentsu Group provides a comprehensive range of client-centric integrated communications, media and digital services through its five leadership brands (Carat, Dentsu X, iProspect, Dentsu Creative, and Merkle) as well as through Dentsu Japan Network companies, including Dentsu Inc., the world's largest single-brand agency with a history of innovation. The Group is also active in the production and marketing of sports and entertainment content on a global scale.

About CXM (Merkle): Merkle is a leading data-driven customer experience management (CXM) company that specializes in the delivery of unique, personalized customer experiences across platforms and devices. For more than 30 years, Fortune 1000 companies and leading nonprofit organizations have partnered with Merkle to maximize the value of their customer portfolios. The company's heritage in data, technology, and analytics forms the foundation for its unmatched skills in understanding consumer insights that drive hyper-personalized marketing strategies. Its combined strengths in performance media, customer experience, customer relationship management, loyalty, and enterprise marketing technology drive improved marketing results and competitive advantage. With 12,000 employees, Merkle is headquartered in Columbia, Maryland, with 50+ additional offices throughout the Americas, EMEA, and APAC. Merkle is a dentsu company.

Key responsibilities: Collaborate with experienced data engineers, data analysts, data strategy consultants, and other stakeholders to understand intricate customer data requirements. Design, implement, and maintain data infrastructure on the cloud to support our customer data architecture.
Assembles large, complex data sets that meet functional and non-functional business requirements. Identifies, designs and implements internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Builds analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Works with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs. Keeps our data separated and secure, adhering to GDPR, CCPA and other data protection acts.

Qualifications: At least 3 years of hands-on experience as a Data Engineer. Proficiency in data pipeline design, development, and optimization, drawing on your expertise in data integration, ETL/ELT and modern tools to ensure efficient data processing and cutting-edge solutions. Daily coding experience, SQL and Python preferred, in real-time and batch scenarios. Demonstrated expertise in implementing data warehouse/lake solutions, data mesh architectures, and distributed processing technologies for production environments. Mastery of programming languages such as SQL, Python, and PySpark/Scala/Java, leveraging them to develop sophisticated data platform engineering solutions.

Dentsu Values: Will live the dentsu 8 Ways at all times: We Dream Loud, We Inspire Change, We Team Without Limits, We All Lead, We Make It Real, We Climb High, We Choose Excitement, We Are A Force For Good.

Inclusion and Diversity: We're proud to be different, and that starts with our people. We believe in equal opportunities for everyone. We won't define people by their race, gender, sexual orientation, age or disability. Individuality is what makes us great; we want everyone to bring their full self to work and create something amazing. That's what we care about. So, whether you're joining us or looking to move to a different part of the business, we work hard to make sure we create equal opportunities for everyone.

Keeping connected: Please visit our website to find out more and connect with us - www.dentsu.com

Location: DGS India - Pune - Baner M- Agile Brand: Merkle Time Type: Full time Contract Type: Permanent
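The pipeline work this listing describes can be pictured with a short example. Below is a minimal sketch, not the team's actual code: a PySpark batch job that cleans a raw data set and aggregates it into an analytics-ready table for BI dashboards. All paths, table and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-metrics").getOrCreate()

# Ingest a raw data set and enforce basic quality rules before aggregation.
events = (
    spark.read.parquet("s3://example-lake/raw_events/")  # hypothetical path
    .filter(F.col("customer_id").isNotNull())
    .dropDuplicates(["event_id"])
)

# Aggregate into an analytics-ready table that a BI dashboard can read.
metrics = events.groupBy("customer_id").agg(
    F.count("*").alias("event_count"),
    F.max("event_ts").alias("last_seen"),
)

metrics.write.mode("overwrite").parquet("s3://example-lake/customer_metrics/")
```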
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Help shape the future of mobility. At Aptiv, we couldn't solve mobility's toughest challenges without our Corporate team. They ensure operations run smoothly by supporting more than 200,000 Aptiv employees and providing the direction and guidance needed as we strive to make the world safer, greener and more connected. IT Data Analytics is a diverse DevOps team of technology enthusiasts enabling our global business. Aptiv has embarked on a data strategy that focuses on establishing a strong technology team, enterprise data management and cloud-based business solutions. Our team is charged with catalyzing value creation in the most critical areas of Aptiv's value chain, touching our business by understanding customer demand, manufacturing implications, and our supply base. As a Data Engineer, you will design, develop and implement a cost-effective, scalable, reusable and secure ingestion framework. You will take advantage of the opportunity to work with business leaders, various stakeholders, and source system SMEs to understand and define the business needs, translate them into technical specifications, and ingest data into Google Cloud Platform (BigQuery). You will design and implement processes for data ingestion, transformation, storage, analysis, modelling, reporting, monitoring, availability, governance and security of high volumes of structured and unstructured data. Want to join us?

Your Role
Pipeline Design & Implementation: Develop and deploy high-throughput data pipelines using the latest GCP technologies. Subject Matter Expertise: Serve as a specialist in data engineering and Google Cloud Platform (GCP) data technologies. Client Communication: Leverage your GCP data engineering experience to engage with clients, understand their requirements, and translate these into technical data solutions. Technical Translation: Analyze business requirements and convert them into technical specifications. Create source-to-target mappings, enhance ingestion frameworks to incorporate internal and external data sources, and transform data according to business rules. Data Cataloging: Develop capabilities to support enterprise-wide data cataloging. Security & Privacy: Design data solutions with a focus on security and privacy. Agile & DataOps: Utilize Agile and DataOps methodologies and implementation strategies in project delivery.

Your Background
Bachelor's or Master's degree in any one of the disciplines: Computer Science, Data & Analytics or similar relevant subjects. 4+ years of hands-on IT experience in a similar role. Proven expertise in SQL – subqueries, aggregations, functions, triggers, indexes, DB optimization, creating/understanding relational database models. Deep experience working with Google data products (e.g. BigQuery, Dataproc, Dataplex, Looker, Cloud Data Fusion, Data Catalog, Dataflow, Cloud Composer, Analytics Hub, Pub/Sub, Dataprep, Cloud Bigtable, Cloud SQL, Cloud IAM, Google Kubernetes Engine, AutoML). Experience with Qlik Replicate, Spark (Scala/Python/Java) and Kafka. Excellent written and verbal skills to communicate technical solutions to business teams. Understanding of trends, new concepts, industry standards and new technologies in the data and analytics space. Ability to work with globally distributed teams. Knowledge of statistical methods and data modelling. Working knowledge of designing and creating Tableau/Qlik/Power BI dashboards, Alteryx and Informatica Data Quality.

Why join us? You can grow at Aptiv.
Aptiv provides an inclusive work environment where all individuals can grow and develop, regardless of gender, ethnicity or beliefs. You can have an impact. Safety is a core Aptiv value; we want a safer world for us and our children, one with: Zero fatalities, Zero injuries, Zero accidents. You have support. We ensure you have the resources and support you need to take care of your family and your physical and mental health with a competitive health insurance package.

Your Benefits at Aptiv: Personal holidays, Healthcare, Pension, Tax saver scheme, Free Onsite Breakfast, Discounted Corporate Gym Membership. Multicultural environment. Learning, professional growth and development in a world-recognized international environment. Access to internal & external training, coaching & certifications. Recognition for innovation and excellence. Access to transportation: Grand Canal Dock is well-connected to public transportation, including DART trains, buses, and bike-sharing services, making it easy to get to and from the area.

Privacy Notice - Active Candidates: https://www.aptiv.com/privacy-notice-active-candidates

Aptiv is an equal employment opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender identity, sexual orientation, disability status, protected veteran status or any other characteristic protected by law.
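As an illustration of the BigQuery ingestion step this role describes, here is a minimal, hypothetical sketch using the google-cloud-bigquery Python client; the project, dataset, table, and bucket names are invented for the example.

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,                      # infer the schema from the file
    write_disposition="WRITE_APPEND",     # append to the existing table
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/demand/daily_orders.csv",   # hypothetical source file
    "example-project.supply_chain.daily_orders",     # hypothetical target table
    job_config=job_config,
)
load_job.result()  # block until the load finishes, raising on failure

table = client.get_table("example-project.supply_chain.daily_orders")
print(f"Loaded; table now has {table.num_rows} rows.")
```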
Posted 1 week ago
10.0 - 12.0 years
12 - 15 Lacs
Chennai
Work from Office
The Opportunity: This role focuses on Anthology Ally, a revolutionary product that makes digital course content more accessible. As the accessibility of digital course content becomes increasingly important worldwide, institutions must address long-standing and often daunting challenges. Anthology's Ally engineering team is responsible for developing industry-leading tools to improve accessibility through inclusivity, sustainability, and automation for all students. As a Staff Software Engineer on our team, you will design, develop, and maintain features of the Ally product. You'll also communicate and partner cross-functionally with teams in product and software development. In this role, you will work on an ethical product, using Scala for the backend and JavaScript for the frontend. We run our applications in the AWS cloud and use Git for version control. You'll work on a distributed team, collaborating with colleagues around the globe.

The Candidate:
Required skills/qualifications: 10-12 years of relevant experience. Good abstract and critical thinking skills. Familiarity with the full-cycle development process. Experience developing, building, testing, deploying, and operating applications. Experience working with cloud technologies. Awareness of how distributed systems work. Strong command of backend programming languages (Java, JavaScript, Python, etc.). Familiarity with relational database design and querying concepts. Willingness to break things and make them work again. Knowledge of and experience with CI/CD principles and tools (Jenkins or Azure Pipelines). Fluency in written and spoken English.

Preferred skills/qualifications: Experience leading a team. Command-line scripting knowledge in a Linux-like environment. Knowledge of cloud computing (AWS). Experience with IntelliJ IDEA (or another IDE). Experience with a version control system (Git). Experience with a bug-tracking system (JIRA). Experience with a continuous integration system and continuous delivery practices. Functional programming experience, such as with Haskell or Scala. Experience with front-end development (Angular) or an interest in learning it.
Posted 1 week ago
7.0 years
0 Lacs
Himachal Pradesh, India
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role
The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data Lakehouse for exploration, insights, model development, ML engineering and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets, including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform software engineers, data scientists and threat analysts to design, implement, and maintain scalable ML pipelines that will be used for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do
Help design, build, and facilitate adoption of a modern Data+ML platform. Modularize complex ML code into standardized and repeatable components. Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring. Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines. Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines. Review code changes from data scientists and champion software development best practices. Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment.

What You'll Need
B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S.
with 5+ years of experience; or Ph.D. with 6+ years of experience. 3+ years of experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used. 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc. Experience building data platform products or features with one of Apache Spark, Flink, or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.). Production experience with infrastructure-as-code tools such as Terraform and FluxCD. Expert-level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools. Expert-level experience with CI/CD frameworks such as GitHub Actions. Expert-level experience with containerization frameworks. Strong analytical and problem-solving skills, capable of working in a dynamic environment. Exceptional interpersonal and communication skills; work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.

Experience with the following is desirable: Go, Iceberg, Pinot or other time-series/OLAP-style databases, Jenkins, Parquet, Protocol Buffers/gRPC.

Benefits Of Working At CrowdStrike: Remote-friendly and flexible work culture. Market leader in compensation and equity awards. Comprehensive physical and mental wellness programs. Competitive vacation and holidays for recharge. Paid parental and adoption leaves. Professional development opportunities for all employees regardless of level or role. Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections. Vibrant office culture with world-class amenities. Great Place to Work Certified™ across the globe.

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions - including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs - on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
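One way to picture the "standardized and repeatable components" this team describes is experiment tracking. The sketch below uses MLflow, one of the platform tools named in the listing, to log a toy model run; the experiment name, parameters, and synthetic data are hypothetical, not CrowdStrike's actual pipeline.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real pipeline would read prepared features.
X, y = make_classification(n_samples=1_000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("threat-model-demo")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    # Logging params, metrics, and the model artifact makes every run
    # reproducible and comparable in the tracking UI.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```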
Posted 1 week ago
15.0 - 24.0 years
40 - 90 Lacs
Bengaluru
Hybrid
Key Skills: Scala, AWS, AWS Cloud, Apache Spark, Architect, SparkSQL, Spark, Spring Boot, Java

Roles and Responsibilities: Technically lead the team and project to meet deadlines. Lead efforts with team members to come up with software solutions. Optimize and maintain existing software. Recommend tech upgrades to company leaders. Build scalable, efficient, and high-performance pipelines and workflows that are capable of processing large amounts of batch and real-time data. Multidisciplinary work supporting real-time streams, ETL pipelines, data warehouses, and reporting services. Design and develop microservices and data applications that interact with other microservices. Use big data technologies such as Kafka, Data Lake on AWS S3, EMR, Spark, and related technologies to ingest, store, aggregate, transform, move, and query data. Follow coding best practices: unit testing, design/code reviews, code coverage, documentation, etc. Performance analysis and capacity planning for every release. Work effectively as part of an Agile team. Bring new and innovative solutions to resolve challenging software issues as they may develop throughout the product lifecycle.

Skills Required: Excellent software design skills. Strong knowledge of design patterns, including performance optimization considerations. Proficient in writing high-quality, well-structured code in Java and Scala. Excellence in a test-driven development approach and in debugging software. Proficient in writing clear, concise, and organized documentation. Knowledge of Amazon cloud computing infrastructure (Aurora MySQL, DynamoDB, EMR, Lambda, Step Functions, and S3). Ability to excel in a team environment. Strong communication skills and the ability to discuss a solution with team members of varying technical sophistication. Ability to perform thoughtful and detailed code reviews, both for peers and junior developers. Familiarity with software engineering and project management tools. Following security protocols and best data governance practices. Able to construct KPIs and use metrics for process improvements.

Minimum qualifications: 12+ years' experience in designing and developing enterprise-level software solutions. 5 years' experience developing Scala/Java applications and microservices using Spring Boot. 10 years' experience with large-volume data processing and big data tools such as Apache Spark, Scala, and Hadoop technologies. 5 years' experience with SQL and relational databases. 2 years' experience working with Agile/Scrum methodology. Education: Bachelor's degree in a related field.
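The batch-plus-real-time pipeline work this role describes might look like the following minimal sketch of Kafka-to-S3 ingestion with Spark Structured Streaming. The posting centers on Scala/Java; PySpark is shown here purely for illustration, and the broker, topic, and S3 paths are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Requires the spark-sql-kafka connector package on the Spark classpath.
spark = SparkSession.builder.appName("orders-stream").getOrCreate()

orders = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
)

# Land the raw stream on the data lake in micro-batches; the checkpoint
# location gives exactly-once file output across restarts.
query = (
    orders.writeStream.format("parquet")
    .option("path", "s3a://example-lake/raw/orders/")
    .option("checkpointLocation", "s3a://example-lake/_chk/orders/")
    .start()
)
query.awaitTermination()
```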
Posted 1 week ago
5.0 - 10.0 years
15 - 20 Lacs
Hyderabad, Chennai, Mumbai (All Areas)
Hybrid
Scala Developer: Designing, creating, and maintaining Scala-based applications Participating in all architectural development tasks related to the application. Writing code in accordance with the app requirements Performing software analysis Working as a member of a software development team to ensure that the program meets standards Application testing and debugging Making suggestions for enhancements to application procedures and infrastructure.
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
LivePerson (NASDAQ: LPSN) is the global leader in enterprise conversations. Hundreds of the world's leading brands - including HSBC, Chipotle, and Virgin Media - use our award-winning Conversational Cloud platform to connect with millions of consumers. We power nearly a billion conversational interactions every month, providing a uniquely rich data set and safety tools to unlock the power of Conversational AI for better customer experiences. At LivePerson, we foster an inclusive workplace culture that encourages meaningful connection, collaboration, and innovation. Everyone is invited to ask questions, actively seek new ways to achieve success, and reach their full potential. We are continually looking for ways to improve our products and make things better. This means spotting opportunities, solving ambiguities, and seeking effective solutions to the problems our customers care about.

Overview
We are looking for an experienced Data Engineer to provide data engineering expertise and support to various analytical products of LivePerson, and to assist in migrating our existing data processing ecosystem from Hadoop (Spark, MapReduce, Java, and Scala) to Databricks on GCP. The goal is to leverage Databricks' scalability, performance, and ease of use to enhance our current workflows.

You Will
Assessment and Planning: Review the existing Hadoop infrastructure, including Spark and MapReduce jobs. Analyze Java and Scala codebases for compatibility with Databricks. Identify dependencies, libraries, and configurations that may require modification. Propose a migration plan with clear timelines and milestones.
Code Migration: Refactor Spark jobs to run efficiently on Databricks. Migrate MapReduce jobs where applicable or rewrite them using the Spark DataFrame/Dataset API. Update Java and Scala code to comply with Databricks' runtime environment.
Testing and Validation: Develop unit and integration tests to ensure parity between the existing and new systems. Compare performance metrics before and after migration. Implement error handling and logging consistent with best practices in Databricks.
Optimization and Performance Tuning: Fine-tune Spark configurations for performance improvements on Databricks. Optimize data ingestion and transformation processes.
Deployment and Documentation: Deploy migrated jobs to production in Databricks. Document changes, configurations, and processes thoroughly. Provide knowledge transfer to internal teams if required.

Required Skills
6+ years of experience in Data Engineering with a focus on building data pipelines, data platforms and ETL (Extract, Transform, Load) processes on Hadoop and Databricks. Strong expertise in Databricks (Spark on Databricks, Delta Lake, etc.), preferably on GCP. Strong expertise in the Hadoop ecosystem (Spark, MapReduce, HDFS) with solid foundations in Spark and its internals. Proficiency in Scala and Java. Strong SQL knowledge. Strong understanding of data engineering and optimization techniques.
Solid understanding of data governance, data modeling and enterprise-scale data lakehouse platforms. Experience with test frameworks like Great Expectations.

Minimum Qualifications
Bachelor's degree in Computer Science or a related field. Certified Databricks Engineer preferred.

You Should Be An Expert In
Databricks with Spark and its internals (3 years) - MUST
Data engineering in the Hadoop ecosystem (5 years) - MUST
Scala and Java (5 years) - MUST
SQL - MUST

Benefits
Health: Medical, Dental and Vision. Time away: vacation and holidays. Development: Access to internal professional development resources. Equal opportunity employer.

Why You'll Love Working Here
As leaders in enterprise customer conversations, we celebrate diversity, empowering our team to forge impactful conversations globally. LivePerson is a place where uniqueness is embraced, growth is constant, and everyone is empowered to create their own success. And, we're very proud to have earned recognition from Fast Company, Newsweek, and BuiltIn for being a top innovative, beloved, and remote-friendly workplace.

Belonging At LivePerson
We are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law. We are committed to the accessibility needs of applicants and employees. We provide reasonable accommodations to job applicants with physical or mental disabilities. Applicants with a disability who require reasonable accommodation for any part of the application or hiring process should inform their recruiting contact upon initial connection.

The talent acquisition team at LivePerson has recently been notified of a phishing scam targeting candidates applying for our open roles. Scammers have been posing as hiring managers and recruiters in an effort to access candidates' personal and financial information. This phishing scam is not isolated to LivePerson and has been documented in news articles and media outlets. Please note that any communication from our hiring teams at LivePerson regarding a job opportunity will only be made by a LivePerson employee with an @liveperson.com email address. LivePerson does not ask for personal or financial information as part of our interview process, including but not limited to your social security number, online account passwords, credit card numbers, passport information and other related banking information. If you have any questions and/or concerns, please feel free to contact recruiting-lp@liveperson.com
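To make the migration concrete: the classic Hadoop word count, rewritten with the Spark DataFrame API as the listing suggests for MapReduce jobs, ports directly to Databricks. This is a minimal sketch with hypothetical paths, not LivePerson's actual workload.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wordcount-df").getOrCreate()

lines = spark.read.text("dbfs:/example/input/")  # hypothetical Databricks path

# explode(split(...)) replaces the map phase of the old MapReduce job;
# groupBy/count replaces its shuffle-and-reduce phase.
counts = (
    lines.select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
    .filter(F.col("word") != "")
    .groupBy("word")
    .count()
    .orderBy(F.desc("count"))
)

counts.write.mode("overwrite").parquet("dbfs:/example/output/wordcounts/")
```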
Posted 1 week ago
5.0 - 8.0 years
20 - 35 Lacs
Pune, Chennai
Work from Office
Greetings from LTIMindtree!! We are hiring Big Data professionals!! Interested candidates, kindly apply via the link below and share your updated CV with Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U
Experience: 3 to 8 yrs
Key Skills: Spark+Python, Spark+Java, and Spark+Scala
Face-to-Face Location: Pune, Chennai

JD 1: Mandatory Skills: Hadoop-Spark, SparkSQL, Java
1. Hands-on experience with Java and big data technology, including Spark, Hive, Impala
2. Experience with a streaming framework such as Kafka
3. Hands-on experience with object storage; should be able to develop data archival and retrieval patterns
4. Good to have experience with any public cloud platform such as AWS, Azure, GCP, etc.
5. Ready to upskill as and when needed on project technologies, viz. Ab Initio

JD 2: Mandatory Skills: Hadoop-Spark, SparkSQL, Python
Relevant experience in ETL and data engineering. Strong knowledge of Spark and Python. Strong experience in Hive/SQL, PL/SQL. Good understanding of ETL & DW concepts and Unix scripting. Design, implement and maintain data pipelines to meet business requirements. Convert business needs into complex technical PySpark code. Ability to write complex SQL queries for reporting purposes. Monitor PySpark code performance and troubleshoot issues.

JD 3: Mandatory Skills: Hadoop-Spark, SparkSQL, Scala
Experience in the Scala programming language. Experience in big data technologies including Spark, Scala and Kafka. We are looking for candidates who have a good understanding of organizational strategy, architecture patterns (microservices, event-driven) and technology choices, and can coach the team in execution in alignment with these guidelines; who can apply organizational technology patterns effectively in projects and make recommendations on alternate options; who have hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage and access, both internal and external to the BU, and the ability to make independent decisions within the scope of a project; who have a good understanding of data structures and algorithms; who can test, debug and fix issues within established SLAs; who can design software that is easily testable and observable; who understand how a team's goals fit a business need; who can identify business problems at the project level and provide solutions; who understand data access patterns, streaming technology, data validation, data performance and cost optimization. Strong SQL skills.
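JD 2's pairing of "complex PySpark code" with "complex SQL queries for reporting" often reduces to registering a DataFrame as a view and driving the report in Spark SQL. Below is a minimal sketch with hypothetical table, path, and column names.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sales-report").getOrCreate()

# Hypothetical source table on the data lake.
orders = spark.read.parquet("hdfs:///example/warehouse/orders/")
orders.createOrReplaceTempView("orders")

# A reporting query expressed in Spark SQL over the registered view.
report = spark.sql("""
    SELECT region,
           date_trunc('month', order_ts)   AS month,
           SUM(amount)                     AS revenue,
           COUNT(DISTINCT customer_id)     AS buyers
    FROM orders
    GROUP BY region, date_trunc('month', order_ts)
    ORDER BY month, region
""")

# Persist the result as a managed table for BI tools (hypothetical database).
report.write.mode("overwrite").saveAsTable("reporting.monthly_sales")
```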
Posted 1 week ago
5.0 - 8.0 years
20 - 35 Lacs
Pune, Chennai, Bengaluru
Hybrid
Greetings from LTIMindtree!! We are hiring Big Data professionals!!
Experience: 3 to 8 yrs
Key Skills: Spark+Python, Spark+Java, and Spark+Scala
Face-to-Face Location: Pune, Chennai
Interested candidates, kindly share your resume and apply via the link below: https://forms.office.com/r/zQucNTxa2U

JD 1: Hadoop-Spark, SparkSQL, Java
Skills needed:
1. Hands-on experience with Java and big data technology, including Spark, Hive, Impala
2. Experience with a streaming framework such as Kafka
3. Hands-on experience with object storage; should be able to develop data archival and retrieval patterns
4. Good to have experience with any public cloud platform such as AWS, Azure, GCP, etc.
5. Ready to upskill as and when needed on project technologies, viz. Ab Initio

JD 2: Hadoop-Spark, SparkSQL, Python
Mandatory Skills: Relevant experience in ETL and data engineering. Strong knowledge of Spark and Python. Strong experience in Hive/SQL, PL/SQL. Good understanding of ETL & DW concepts and Unix scripting. Design, implement and maintain data pipelines to meet business requirements. Convert business needs into complex technical PySpark code. Ability to write complex SQL queries for reporting purposes. Monitor PySpark code performance and troubleshoot issues.

JD 3: Hadoop-Spark, SparkSQL, Scala
Experience in the Scala programming language. Experience in big data technologies including Spark, Scala and Kafka. We are looking for candidates who have a good understanding of organizational strategy, architecture patterns (microservices, event-driven) and technology choices, and can coach the team in execution in alignment with these guidelines; who can apply organizational technology patterns effectively in projects and make recommendations on alternate options; who have hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage and access, both internal and external to the BU, and the ability to make independent decisions within the scope of a project; who have a good understanding of data structures and algorithms; who can test, debug and fix issues within established SLAs; who can design software that is easily testable and observable; who understand how a team's goals fit a business need; who can identify business problems at the project level and provide solutions; who understand data access patterns, streaming technology, data validation, data performance and cost optimization. Strong SQL skills.
Posted 1 week ago
8.0 years
0 Lacs
India
On-site
The Data and Common Services (DCS) team within the Yahoo Advertising Engineering organization is responsible for the Advertising core data infrastructure and services that provide common, horizontal services for user and contextual targeting, privacy and analytics. We are looking for a talented junior or mid-level engineer who can design, implement, and support robust, scalable and high-quality solutions related to Advertising Targeting, Identity, Location and Trust & Verification. As a member of the team, you will be helping our Ad platforms deliver a highly accurate and relevant advertising experience for our consumers and for the web at large.

Job Location: Hyderabad (Hybrid Work Model)

Job Description
Design and code backend Java applications and services. Emphasis is placed on implementing maintainable, scalable systems capable of handling billions of requests per day. Analyze business and technical requirements and design solutions that meet those needs. Collaborate with project managers to develop and clarify requirements. Work with Operations Engineers to ensure applications are operations-ready and able to be effectively monitored using automated methods. Troubleshoot production issues related to the team's applications. Effectively manage day-to-day tasks to meet scheduled commitments. Be able to work independently. Collaborate with programmers both on your team and on other teams.

Skills And Education
B.Tech/BE in Computer Science or an equivalent technical discipline. 8+ years of experience designing and programming in a Unix/Linux environment. Excellent written and verbal communication skills, e.g., the ability to explain the work in plain language. Experience delivering innovative, customer-centric products at high scale. Technically strong, with a track record of successful delivery as an individual contributor. Experience building robust, scalable, distributed services. Execution experience in fast-paced environments and a performance-driven culture. Experience with big data technologies, such as Spark, Hadoop, and Airflow. Knowledge of CI/CD and DevOps tools and processes. Strong programming skills in Java, Python, or Scala. Solid understanding of RDBMS and general database concepts. Extensive technical knowledge of and experience with distributed systems. Strong programming, testing, and troubleshooting skills. Experience with public clouds such as AWS.

Important notes for your attention
Applications: All applicants must apply for Yahoo openings directly with Yahoo. We do not authorize any external agencies in India to handle candidates' applications. No agency or individual may charge candidates for any efforts they make on an applicant's behalf in the hiring process. Our internal recruiters will reach out to you directly to discuss the next steps if we determine that the role is a good fit for you. Selected candidates will go through formal interviews and assessments arranged by Yahoo directly.
Offer Distributions: Our electronic offer letter and documents will be issued through our system for e-signatures, not via individual emails.

Yahoo is proud to be an equal opportunity workplace. All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on, age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability or any other protected category.
Yahoo will consider for employment qualified applicants with criminal histories in a manner consistent with applicable law. Yahoo is dedicated to providing an accessible environment for all candidates during the application process and for employees during their employment. If you need accessibility assistance and/or a reasonable accommodation due to a disability, please submit a request via the Accommodation Request Form (www.yahooinc.com/careers/contact-us.html) or call +1.866.772.3182. Requests and calls received for non-disability related issues, such as following up on an application, will not receive a response. Yahoo has a high degree of flexibility around employee location and hybrid working. In fact, our flexible-hybrid approach to work is one of the things our employees rave about. Most roles don't require specific regular patterns of in-person office attendance. If you join Yahoo, you may be asked to attend (or travel to attend) on-site work sessions, team-building, or other in-person events. When these occur, you'll be given notice to make arrangements. If you're curious about how this factors into this role, please discuss with the recruiter. Currently work for Yahoo? Please apply on our internal career site.
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
The objective of our Digital Risk Consulting service is to support clients with the development, implementation, improvement, and modernization of their technology risk and compliance programs to address the constantly changing risk and technology landscape. Our solutions can be used by our clients to build confidence and trust with their customers, the overall market, and when required by regulation or contract.

Your Key Responsibilities
You will operate as a team leader for engagements to help our clients develop and strengthen their IT risk and compliance programs. You will work directly with clients to review their IT processes and controls, remediate and implement controls, onboard new tools and services into risk and compliance frameworks, and assist with readiness for and adherence to new compliance regulations. Your responsibilities include both in-person and remote oversight and coaching of engagement team members, reporting to both senior engagement team members and client leadership, as well as partnering with our key client contacts to complete the engagement work.

What You'll Do
Designing and implementing solutions to various data-related technical/compliance challenges such as DevSecOps, data strategy, data governance, data risks & relevant controls, data testing, data architecture, data platforms, data solution implementation, data quality and data security to manage and mitigate risk. Leveraging data analytics tools/software to build robust and scalable solutions through data analysis and data visualizations using SQL, Python and visualization tools. Design and implement comprehensive data analytics strategies to support business decision-making. Collect, clean, and interpret large datasets from multiple sources, ensuring completeness, accuracy and integrity of data. Integrating and/or piloting next-generation technologies such as cloud platforms, machine learning and Generative AI (GenAI). Developing custom scripts and algorithms to automate data processing and analysis to generate insights. Applying business/domain knowledge, including regulatory requirements and industry standards, to solve complex data-related challenges. Analyzing data to uncover trends and generate insights that can inform business decisions. Build and maintain relationships across Engineering, Product, Operations, Internal Audit, external audit and other external stakeholders to drive effective financial risk management. Work with DevSecOps, Security Assurance, Engineering, and Product teams to improve the efficiency of control environments and provide risk management through implementation of automation and process improvement. Bridge gaps between IT controls and business controls, including ITGCs and automated business controls.
Work with IA to ensure the complete control environment is managed. Work with emerging products to understand their risk profile and ensure an appropriate control environment is established. Implement new processes and controls in response to changes in the business environment, such as new product introductions, changes in accounting standards, internal process changes or reorganization.

What You'll Need
Experience in data architecture, data management, data engineering, data science or data analytics. Experience in building analytical queries and dashboards using SQL, NoSQL, Python, etc. Proficient in SQL and quantitative analysis: you can deep dive into large amounts of data, draw meaningful insights, dissect business issues and draw actionable conclusions. Knowledge of tools in the following areas: Scripting and Programming (e.g., Python, SQL, R, Java, Scala, etc.), Big Data Tools (e.g., Hadoop, Hive, Pig, Impala, Mahout, etc.), Data Management (e.g., Informatica, Collibra, SAP, Oracle, IBM, etc.), Predictive Analytics (e.g., Python, IBM SPSS, SAS Enterprise Miner, RPL, MATLAB, etc.), Data Visualization (e.g., Tableau, Power BI, TIBCO Spotfire, QlikView, SPSS, etc.), Data Mining (e.g., Microsoft SQL Server, etc.), Cloud Platforms (e.g., AWS, Azure, or Google Cloud). Ability to analyze complex processes to identify potential financial, operational, systems and compliance risks across major finance cycles. Ability to assist management with the integration of security practices in the product development lifecycle (DevSecOps). Experience with homegrown applications in a microservices/dev-ops environment. Experience with identifying potential security risks in platform environments and developing strategies to mitigate them. Experience with SOX readiness assessments and control implementation. Knowledge of DevOps practices, CI/CD pipelines, code management and automation tools (e.g., Jenkins, Git, Phab, Artifactory, SonarQube, Selenium, Fortify, Acunetix, Prisma Cloud).

Preferred: Experience in managing technical data projects; leveraging data analytics tools/software to develop solutions and scripts; developing statistical model tools and techniques; developing and executing data governance frameworks or operating models; identifying data risks and designing and/or implementing appropriate controls; implementation of data quality processes; developing data services and solutions in a cloud environment; designing data architecture; analyzing complex data sets and communicating findings effectively; process management experience, including process redesign and optimization; experience in scripting languages (e.g., Python, Bash); experience in cloud platforms (e.g., AWS, Azure, GCP) and securing cloud-based applications/services.

To qualify for the role, you must have
A bachelor's or master's degree. 1-3 years of experience working as an IT risk consultant or in data analytics. Bring your experience in applying relevant technical knowledge in at least one of the following engagements: (a) risk consulting, (b) financial statement audits, (c) internal or operational audits, (d) IT compliance, and/or (e) Service Organization Controls Reporting engagements. We would expect you to be available to travel outside of your assigned office location at least 50% of the time, plus commute within the region (where public transportation often is not available). Successful candidates must work in excess of standard hours when necessary. A valid passport is required.
Ideally, you'll also have a bachelor's or master's degree in business, computer science, information systems, informatics, computer engineering, accounting, or a related discipline. CISA, CISSP, CISM, CPA or CA certification is desired; non-certified hires are required to become certified to be eligible for promotion to Manager.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
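The custom data-quality scripting this role describes can be as simple as the following pandas sketch, which profiles a hypothetical extract for completeness and integrity before computing a monthly trend; the file and column names are invented for the example.

```python
import pandas as pd

df = pd.read_csv("transactions.csv")  # hypothetical extract

# Completeness: null rate per column, a basic data-quality control.
null_rates = df.isna().mean().sort_values(ascending=False)

# Integrity: duplicate keys and out-of-range values.
dup_keys = df["txn_id"].duplicated().sum()
bad_amounts = (df["amount"] < 0).sum()

print(null_rates.head())
print(f"duplicate txn_ids: {dup_keys}, negative amounts: {bad_amounts}")

# A simple trend insight to inform decisions: monthly volume and value.
df["txn_date"] = pd.to_datetime(df["txn_date"])
monthly = df.set_index("txn_date").resample("MS")["amount"].agg(["count", "sum"])
print(monthly.tail())
```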
Posted 1 week ago