3.0 years
35 Lacs
Surat, Gujarat, India
Remote
Experience: 3.00+ years
Salary: INR 3,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by: NA)
(*Note: This is a requirement for one of Uplers' clients - Nomupay)

What do you need for this opportunity?
Must-have skills: Apache Hudi, Flink, Iceberg, Apache Airflow, Spark, AWS, Azure, GCP, Kafka, SQL

Nomupay offers:
📈 Opportunity in a company with a solid track record of performance
🤝 Opportunity to work with diverse, global teams
🚀 Rapid career advancement with opportunities to learn
💰 Competitive salary and performance bonus

Responsibilities:
- Design, build, and optimize scalable ETL pipelines using Apache Airflow or similar frameworks to process and transform large datasets efficiently (a minimal Airflow sketch follows this listing).
- Use Spark (PySpark), Kafka, Flink, or similar tools to enable distributed data processing and real-time streaming solutions.
- Deploy, manage, and optimize data infrastructure on cloud platforms such as AWS, GCP, or Azure, ensuring security, scalability, and cost-effectiveness.
- Design and implement robust data models, ensuring data consistency, integrity, and performance across warehouses and lakes.
- Improve query performance through indexing, partitioning, and tuning techniques for large-scale datasets.
- Manage cloud-based storage solutions (Amazon S3, Google Cloud Storage, Azure Blob Storage) and ensure data governance, security, and compliance.
- Work closely with data scientists, analysts, and software engineers to support data-driven decision-making, while maintaining thorough documentation of data processes.

Requirements:
- Strong proficiency in Python and SQL, with additional experience in languages such as Java or Scala.
- Hands-on experience with frameworks such as Spark (PySpark), Kafka, Apache Hudi, Iceberg, or Apache Flink for distributed data processing and real-time streaming.
- Familiarity with cloud platforms such as AWS, Google Cloud Platform (GCP), or Microsoft Azure for building and managing data infrastructure.
- Strong understanding of data warehousing concepts and data modeling principles.
- Experience with ETL tools such as Apache Airflow or comparable data transformation frameworks.
- Proficiency with data lakes and cloud-based storage solutions such as Amazon S3, Google Cloud Storage, or Azure Blob Storage.
- Expertise in Git for version control and collaborative coding.
- Expertise in performance tuning for large-scale data processing, including partitioning, indexing, and query optimization.

About NomuPay: NomuPay is a newly established company that, through its subsidiaries, provides state-of-the-art unified payment solutions to help its clients accelerate growth in large, high-growth markets in Asia, Turkey, and the Middle East. NomuPay is funded by Finch Capital, a leading European and Southeast Asian financial technology investor. NomuPay acquired Wirecard Turkey on April 21, 2021 for an undisclosed amount.

Founders: Peter Burridge, CEO. An investor, board member, and strategic executive, Peter has more than 30 years of management and leadership experience at rapid-growth technology companies. His hands-on approach to business development and corporate governance has made him a trusted advisor and authority in the enterprise software industry and the financial technology sector. As President of Hyperwallet, Peter guided the organization through a successful recapitalization, followed by global expansion and the ultimate sale of the business to PayPal. A recognizable figure in the San Francisco fintech community and the global payments industry, Peter previously served in leadership roles at Oracle, Siebel, and Travelex Global Business Payments, and as an investor and advisor in the technology sector. Outside the office, his passions include racing cars, golf, and rugby union.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for an interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant opportunities and progress in their careers, and we will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
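To make the Airflow responsibility concrete, here is a minimal sketch of a daily extract-transform-load DAG in the Airflow 2.x TaskFlow style. The DAG name, task bodies, and sample records are illustrative assumptions, not Nomupay's actual pipeline.

```python
# Minimal daily ETL DAG sketch (Airflow 2.x TaskFlow API). All task logic and
# sample data below are placeholders for illustration only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_etl():
    @task
    def extract() -> list[dict]:
        # A real pipeline would read from Kafka, S3, or an upstream database.
        return [{"order_id": 1, "amount": 125.0}, {"order_id": 2, "amount": -3.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Example transformation: drop invalid records.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # A real pipeline would write to a warehouse or data lake table.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


daily_etl()
```

Keeping extract, transform, and load as separate tasks gives per-step retries and visibility in the Airflow UI, which is the usual reason to split a pipeline this way.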
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Overview
We are seeking a skilled Data Engineer to join our team. The successful candidate will be responsible for maintaining and optimizing data pipelines, implementing robust data checks, and ensuring the accuracy and integrity of data flows. This role is critical in supporting data-driven decision-making, especially in the context of our insurance-focused business operations.

Key Responsibilities
- Data Collection and Acquisition: Source Identification, Data Licensing and Compliance, Data Crawling/Collection
- Data Preprocessing and Cleaning: Data Cleaning, Text Tokenization, Normalization, Noise Filtering (a minimal preprocessing sketch follows this listing)
- Data Transformation and Feature Engineering: Text Embedding, Text Augmentation, Handling Multilingual Data
- Data Pipeline Development: Scalable Pipelines, ETL Processes, Automation
- Data Storage and Management: Data Warehousing, Database Optimization, Version Control
- Collaboration with Data Scientists and ML Engineers: Data Accessibility, Support for Model Development, Data Quality Assurance
- Performance Optimization and Scaling: Efficient Data Handling, Distributed Computing
- Data Security and Privacy: Data Anonymization, Compliance with Regulations
- Documentation and Reporting: Data Pipeline Documentation, Reporting

Candidate Profile
6-10 years of relevant experience with data engineering tools:
- Data Processing & Storage: Apache Spark, Apache Hadoop, Apache Kafka, Google BigQuery, AWS S3, Databricks
- Machine Learning Frameworks: TensorFlow, PyTorch, Hugging Face Transformers, scikit-learn
- Data Pipelines & Automation: Apache Airflow, Kubeflow, Luigi
- Version Control & Collaboration: Git, DVC (Data Version Control)
- Data Extraction: BeautifulSoup, Scrapy, APIs (RESTful, GraphQL)

What We Offer
EXL Analytics offers an exciting, fast-paced, and innovative environment that brings together a group of sharp, entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses our clients engage in, as well as effective teamwork and time-management skills, both key to personal and professional growth. Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance and coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. The sky is the limit for our team members: the unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.
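As a concrete illustration of the cleaning, normalization, and tokenization duties above, here is a small pandas sketch. The column name, sample records, and filtering rules are invented for the example, assuming insurance-style free-text data.

```python
# Illustrative text cleaning/normalization/tokenization step in pandas.
# Column name and rules are assumptions, not actual client logic.
import pandas as pd

df = pd.DataFrame({"claim_text": ["  Policy #123 APPROVED!! ", None, "Water damage claim"]})

df = df.dropna(subset=["claim_text"])               # noise filtering: drop empty rows
df["claim_text"] = (
    df["claim_text"]
    .str.strip()
    .str.lower()                                    # normalization
    .str.replace(r"[^a-z0-9#\s]", "", regex=True)   # strip punctuation noise
)
df["tokens"] = df["claim_text"].str.split()         # naive whitespace tokenization
print(df)
```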
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary
As a Senior Data Scientist specializing in NLP, Generative AI, and cloud technologies, you will be responsible for driving the development of data extraction solutions from documents at scale. This role requires advanced technical expertise in machine learning, NLP, and cloud computing, with a focus on automating document understanding processes and enhancing the quality of data extraction through state-of-the-art techniques. You will lead the design, implementation, and deployment of scalable NLP and AI models, mentor junior data scientists, and work collaboratively with cross-functional teams to deliver innovative solutions. This is a strategic role that requires both deep technical knowledge and leadership capabilities to shape the future of document data extraction within the organization.

Key Responsibilities
- Lead Data Extraction Solutions: Design, implement, and scale advanced NLP and machine learning models for automating the extraction of structured data from a wide range of unstructured documents (e.g., PDFs, scanned images, contracts, reports); a minimal NER sketch follows this listing.
- Generative AI Expertise: Leverage Generative AI models (such as GPT, BERT, and related architectures) for tasks such as document summarization, content generation, and enhancing extracted data.
- Cloud-Based Deployment: Architect and deploy data extraction models and workflows in cloud environments (AWS, Azure, GCP), ensuring scalability, reliability, and cost-efficiency.
- Model Development & Optimization: Develop and fine-tune machine learning and NLP models, ensuring high performance in accuracy, efficiency, and robustness for real-world data extraction tasks.
- Data Pipeline Design: Build and optimize end-to-end data pipelines, including data preprocessing, feature engineering, and model deployment, to process large-scale document datasets in the cloud.
- Cross-Functional Collaboration: Work closely with product, engineering, and business teams to understand requirements, provide technical solutions, and deliver impactful data-driven results.
- Research & Innovation: Stay up to date with the latest advancements in NLP, machine learning, and AI, applying cutting-edge research to improve data extraction methodologies.
- Mentorship & Leadership: Lead and mentor a team of junior data scientists, providing guidance on best practices, model development, and cloud deployment.
- Model Monitoring & Maintenance: Establish systems for monitoring model performance in production and ensure models are maintained and updated based on new data or changing requirements.
- Compliance & Security: Ensure data processing and extraction workflows adhere to industry standards, data privacy regulations, and security protocols, particularly when working with sensitive information.

Required Skills & Qualifications
- Experience: Minimum 8 years of experience as a Data Scientist or in a similar role, with a focus on NLP, machine learning, and AI, including at least 3 years in a senior or lead capacity.
- NLP & Document Processing Expertise: Proven experience applying NLP techniques such as Named Entity Recognition (NER), Optical Character Recognition (OCR), information extraction, document classification, and semantic analysis for data extraction from unstructured text.
- Generative AI: Advanced knowledge of Generative AI models (e.g., GPT-3, BERT, T5) and experience applying them to real-world document and text processing tasks.
- Cloud Technologies: Extensive experience with cloud platforms (AWS, Azure, or GCP) for deploying data pipelines, managing machine learning models, and processing large datasets.
- Programming Skills: Proficiency in Python and libraries such as spaCy, Hugging Face Transformers, TensorFlow, PyTorch, and scikit-learn.
- Data Pipeline & DevOps Tools: Hands-on experience building, optimizing, and deploying data pipelines in cloud environments, including tools like Docker, Kubernetes, Apache Airflow, and MLflow.
- Data Handling & Analysis: Expertise in data manipulation and analysis using tools such as Pandas, NumPy, and SQL, and the ability to work with large datasets.
- Leadership & Communication: Strong leadership and mentoring abilities, with excellent written and verbal communication skills to explain complex technical concepts to non-technical stakeholders.
- Problem Solving: Exceptional problem-solving skills with a creative approach to challenges in document data extraction.
- Collaboration: Experience working in a collaborative, cross-functional team environment to deliver end-to-end solutions.

Preferred Qualifications
- Advanced Degree: Master's or PhD in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Advanced NLP Techniques: Experience with state-of-the-art NLP methods such as transfer learning, attention mechanisms, and reinforcement learning applied to document data extraction.
- Compliance Experience: Familiarity with legal, financial, or healthcare industry regulations regarding data privacy and document processing.
- Industry Experience: Previous experience in industries such as finance, legal, healthcare, or other sectors that rely heavily on document data extraction.
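As a hedged illustration of the NER-based extraction described above, here is a minimal Hugging Face Transformers sketch. The checkpoint is a public demo model and the sample sentence is invented; neither is the team's production setup.

```python
# Minimal NER sketch with Hugging Face Transformers. The checkpoint below is
# a public demo model, assumed here only for illustration.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "This agreement is made between Acme Corp and John Smith on 1 May 2024."
for entity in ner(text):
    print(f"{entity['entity_group']:>5}  {entity['word']:<12} {entity['score']:.3f}")
```

In a real document pipeline this step would typically run after OCR and be followed by validation and structured storage of the extracted fields.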
Posted 1 week ago
3.0 years
0 Lacs
Goregaon, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- Proficiency with Microsoft Excel, Access, PowerPoint, Qlik Sense, and SQL required.
- Develop & Maintain Qlik Sense Solutions: Design, develop, and manage interactive dashboards, reports, and applications using Qlik Sense.
- Data Modeling & Governance: Build and maintain data models to ensure accuracy, consistency, and integrity in reporting.
- SQL Development: Write and troubleshoot complex SQL queries for data extraction, transformation, and analysis (a small self-contained SQL sketch follows this listing).
- Qlik Sense Administration: Manage Qlik Sense environments, ensuring optimal performance, security, and access control.
- Requirement Gathering: Work closely with business stakeholders to understand requirements and translate them into BI solutions.
- Automation & Reporting: Implement automated reporting solutions using NPrinting and alerting features to improve efficiency.
- Agile & Kanban Execution: Lead BI projects using Agile methodologies, ensuring timely delivery and iterative improvements.
- Training & Mentorship: Conduct user training sessions, support business teams in using Qlik Sense effectively, and mentor junior analysts.
- Collaboration with Leadership: Engage with technical and business leaders to refine BI solutions and enhance data-driven decision-making.

Requirements:
- 3-6 years of experience in Qlik Sense development and administration.
- Expertise in Qlik Sense with a strong understanding of data visualization and BI best practices.
- Strong SQL skills for query development and troubleshooting.
- Deep understanding of data modeling, data governance, and data warehousing concepts.
- Experience working in Agile environments (Kanban preferred).
- Ability to gather business requirements and translate them into actionable BI solutions.
- Excellent problem-solving and analytical skills with an innovative mindset.
- Strong communication skills to collaborate effectively with business and technical teams.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3-6 years of experience in Qlik Sense development and data visualization, preferably within the manufacturing sector.
- Strong proficiency in data modeling, scripting, and data integration within Qlik Sense.
- Experience with SQL and relational databases, particularly those related to manufacturing data.
- Solid understanding of data warehousing concepts and business intelligence tools.
- Excellent analytical and problem-solving skills, with the ability to translate procurement data into insights.
- Strong communication and interpersonal skills to work effectively with stakeholders in production, operations, and supply chain.
- Ability to manage multiple projects and deliver results within deadlines.

Mandatory Skill Sets ('must have' knowledge, skills, and experience): MS Excel, Qlik Sense, SQL
Preferred Skill Sets ('good to have' knowledge, skills, and experience): statistical analysis, SAP Analytics
Years of Experience Required: 6 to 9 years of relevant experience
Education Qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% or above)
Degrees/Fields of Study required: Master of Engineering, Bachelor of Engineering, Master of Business Administration
Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
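Since the role centers on complex SQL for BI reporting, here is a small self-contained sketch of the kind of windowed query involved. It uses Python's built-in sqlite3 purely so the example runs anywhere; the table, columns, and data are invented, and in practice the same SQL pattern would run against the warehouse behind Qlik Sense.

```python
# Self-contained demo of a windowed BI-style SQL query (running total per
# region). sqlite3 is used only for portability; schema and data are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('North', '2024-01', 100), ('North', '2024-02', 120),
        ('South', '2024-01', 90),  ('South', '2024-02', 80);
""")

query = """
    SELECT region, month, revenue,
           SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM sales
    ORDER BY region, month;
"""
for row in con.execute(query):
    print(row)
```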
Posted 1 week ago
12.0 years
0 Lacs
India
Remote
Role: AWS Data Engineer
Experience: 12+ years

JD for AWS Data Engineer:
- Experience with EMR
- Knowledge of the Python programming language
- Knowledge of data processing using the pandas library
- Extensive knowledge of processing CSV, Excel, JSON, and YAML files using Python (a small self-contained sketch follows this listing)
- Good knowledge of big data technologies: PySpark, Hadoop, Hive
- Knowledge of AWS services: S3, Lambda, Redshift, Glue
- Hands-on experience with Apache Airflow for building workflows
- Knowledge of building ETL and data pipelines
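A hedged sketch of the multi-format file handling the JD lists. The file contents here are invented so the snippet is self-contained; PyYAML is assumed available, and Excel would be read the same way with pd.read_excel (which needs the openpyxl engine).

```python
# Reading CSV, JSON, and YAML with Python/pandas. All file contents below are
# invented placeholders so the example runs as-is.
import pathlib

import pandas as pd
import yaml  # PyYAML

pathlib.Path("input.csv").write_text("id,amount\n1,10.5\n2,20.0\n")
pathlib.Path("input.json").write_text('[{"id": 1, "region": "N"}, {"id": 2, "region": "S"}]')
pathlib.Path("config.yaml").write_text("source: s3\nretries: 3\n")

csv_df = pd.read_csv("input.csv")    # CSV -> DataFrame
json_df = pd.read_json("input.json") # JSON records -> DataFrame

with open("config.yaml") as fh:
    config = yaml.safe_load(fh)      # -> {'source': 's3', 'retries': 3}

merged = csv_df.merge(json_df, on="id")  # combine the two tabular sources
print(merged)
print(config)
```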
Posted 1 week ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Key Responsibilities
- Collaborate with the AI/ML Platform Enablement team within the eCommerce Analytics division to support strategic transformation initiatives.
- Develop, maintain, and scale production-grade ML models and pipelines using MLOps best practices.
- Build and orchestrate scalable data workflows using Apache Airflow.
- Design and implement complex ETL pipelines using Python and PySpark for large-scale data processing (a minimal PySpark sketch follows this listing).
- Lead the deployment and configuration of Kubernetes clusters, including API Gateway, Ingress, Model Serving, Monitoring, and Cron Jobs.
- Contribute hands-on and lead architectural design for scalable and resilient AI/ML platforms.
- Partner with domain experts and IT leaders to drive cross-functional initiatives and ensure seamless integration of AI solutions.
- Identify opportunities for system optimization, resource efficiency, and operational excellence.
- Develop and maintain robust observability practices using tools like Prometheus, Grafana, and Splunk.

Required Skills & Experience
- Hands-on experience with:
  - Python, PySpark, and Apache Airflow for data engineering and pipeline orchestration
  - GenAI, LLMs, and advanced NLP model development
  - MLOps pipeline design and orchestration
  - Docker, Kubeflow, and Kubernetes (GKE/EKS/AKS or on-prem)
  - GCP (Google Cloud Platform)
  - Web development frameworks (e.g., Flask, FastAPI)
- Strong knowledge of machine learning algorithms (parameterized and non-parameterized).
- Experience developing containerized ML components and scalable ML workflows.
- Familiarity with observability tools such as Prometheus, Grafana, and Splunk.
- Proven ability to troubleshoot and diagnose complex system-level issues.
- Excellent communication and stakeholder management skills.

Education & Qualifications
- Bachelor's or Master's degree in a quantitative field (Computer Science, Engineering, Math, Economics, or similar).
- 6+ years of professional experience in data science and ML in cloud-based distributed environments.
- Demonstrated success leading cross-functional AI/ML initiatives at scale.

Skills: Python, PySpark, Airflow
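Below is a minimal PySpark sketch of the kind of batch ETL step described above. The S3 paths, schema, and aggregation are illustrative assumptions only.

```python
# Hedged PySpark ETL sketch: filter raw events, derive a date column, and
# write a partitioned daily aggregate. Paths and fields are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical input

daily = (
    events
    .filter(F.col("event_type") == "purchase")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date")
    .agg(F.count("*").alias("purchases"), F.sum("amount").alias("revenue"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_purchases/"  # hypothetical output
)
```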
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Who We Are
Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products, all of which enables more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value - and that we do. Zinnia has over $180 billion in assets under administration, serves 100+ carrier clients, 2,500 distributors and partners, and over 2 million policyholders.

Who You Are
As a seasoned Data Engineer, you bring extensive expertise in optimizing data workflows using database tools such as Oracle, BigQuery, and SQL Server. You possess a deep understanding of ELT/ETL processes and data integration, and you have a strong command of Python for data manipulation and automation. You bring advanced expertise in data platforms such as Google BigQuery, DBT, and Airflow, and will be responsible for designing and maintaining scalable ETL pipelines, optimizing complex data systems, and ensuring smooth data flow across platforms. As a Senior Data Engineer, you will also work collaboratively in a team and contribute to building the data infrastructure that drives business insights.

What You'll Do
- Design, develop, and optimize complex ETL pipelines that integrate large datasets from various sources.
- Build and maintain high-performance data models using Google BigQuery and DBT for data transformation (a minimal BigQuery sketch follows this listing).
- Develop Python scripts for data ingestion, transformation, and automation.
- Implement and manage data workflows using Apache Airflow for scheduling and orchestration.
- Collaborate with data scientists, analysts, and other stakeholders to ensure data availability, reliability, and performance.
- Troubleshoot and optimize data systems, identifying and resolving issues proactively.
- Work on cloud-based platforms, particularly AWS, to leverage scalability and storage options for data pipelines.
- Ensure data integrity, consistency, and security across systems.
- Take ownership of end-to-end data engineering tasks while mentoring junior team members.
- Continuously improve processes and technologies for more efficient data processing and delivery.
- Act as a key contributor to developing and supporting complex data architectures.

What You'll Need
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 6+ years of hands-on experience in data engineering or related fields, with a strong background in building and optimizing data pipelines.
- Strong proficiency in Google BigQuery, including designing and optimizing queries.
- Advanced knowledge of DBT for data transformation and model management.
- Proficiency in Python for data engineering tasks, including scripting, data manipulation, and automation.
- Solid experience with Apache Airflow for workflow orchestration and task automation.
- Extensive experience building and maintaining ETL pipelines.
- Familiarity with cloud platforms, particularly AWS (Amazon Web Services), including tools like S3, Lambda, Redshift, and Glue.
- Java knowledge is a plus.
- Excellent problem-solving and troubleshooting abilities.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Self-motivated, detail-oriented, and able to work with minimal supervision.
- Ability to manage multiple priorities and deadlines in a fast-paced environment.
- Experience with other cloud platforms (e.g., GCP, Azure) is a plus.
- Knowledge of data warehousing best practices and architecture.

WHAT'S IN IT FOR YOU?
At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information. Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.
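To make the BigQuery work concrete, here is a hedged sketch using the official google-cloud-bigquery Python client. The project, dataset, table, and columns are invented placeholders; in this stack the query would typically be wrapped in an Airflow task or materialized as a DBT model.

```python
# Minimal BigQuery query sketch (google-cloud-bigquery client). All names in
# the SQL are invented placeholders; credentials come from the environment.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT policy_type, COUNT(*) AS n_policies
    FROM `example-project.analytics.policies`
    GROUP BY policy_type
    ORDER BY n_policies DESC
"""
for row in client.query(query).result():
    print(row["policy_type"], row["n_policies"])
```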
Posted 1 week ago
7.0 years
0 Lacs
Himachal Pradesh, India
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper scale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. What You’ll Do Help design, build, and facilitate adoption of a modern Data+ML platform Modularize complex ML code into standardized and repeatable components Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines Review code changes from data scientists and champion software development best practices Leverage cloud services like Kubernetes, blob storage, and queues in our cloud first environment What You’ll Need B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7 + years related experience; or M.S. 
with 5+ years of experience; or Ph.D with 6+ years of experience. 3+ years experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised / unsupervised approaches: how, why, and when and labeled data is created and used 3+ years experience with ML Platform tools like Jupyter Notebooks, NVidia Workbench, MLFlow, Ray, Vertex AI etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.) Production experience with infrastructure-as-code tools such as Terraform, FluxCD Expert level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal Crowdstrike tools Expert level experience with CI/CD frameworks such as GitHub Actions Expert level experience with containerization frameworks Strong analytical and problem solving skills, capable of working in a dynamic environment Exceptional interpersonal and communication skills. Work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes. Experience With The Following Is Desirable Go Iceberg Pinot or other time-series/OLAP-style database Jenkins Parquet Protocol Buffers/GRPC VJ1 Benefits Of Working At CrowdStrike Remote-friendly and flexible work culture Market leader in compensation and equity awards Comprehensive physical and mental wellness programs Competitive vacation and holidays for recharge Paid parental and adoption leaves Professional development opportunities for all employees regardless of level or role Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections Vibrant office culture with world class amenities Great Place to Work Certified™ across the globe CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. 
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
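As a concrete illustration of the experiment-tracking side of an ML platform like the one above, here is a minimal sketch using MLflow, one of the tools the posting names. The model, hyperparameters, and metric are placeholder assumptions for demonstration, not CrowdStrike's actual pipeline.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real platform would read from a feature store.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):        # one tracked experiment run
    params = {"n_estimators": 100, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    mlflow.log_params(params)                         # record hyperparameters
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)                # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")          # persist the model artifact
```

Wrapping every run this way is what makes "repeatable patterns for model development" auditable: each run's parameters, metrics, and artifacts land in one tracking store.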
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
As Europe's fastest-growing unicorn, we revolutionize eCommerce globally. Through strategic acquisitions, scaling initiatives, and cutting-edge technology, we're the top player in the industry. Following our recent acquisition of Perch - the leading US aggregator - and a successful Series D funding round led by Presight Capital, we’re aimed toward a $1 billion top-line business! Your Role We are seeking a Data Analytics & Business Intelligence Lead with deep expertise in analytics, data warehousing, and cross-functional reporting. This role is critical to shaping and driving our data strategy across all major functions—including Finance, Supply Chain, and Revenue/Growth—within our e-commerce ecosystem. The ideal candidate will own the end-to-end analytics and reporting lifecycle, delivering actionable insights that directly influence strategic decisions and operational outcomes. Your responsibilities will include: Cross-Functional Business Partnership: Engage with stakeholders across all key functions to understand business objectives, identify opportunities, and translate them into analytics and BI solutions that drive impact. End-to-End Reporting Ownership: Lead the design and delivery of reporting frameworks, dashboards, and performance metrics that provide visibility into business performance and facilitate data-driven decisions. Data Modelling & Warehousing: Design robust SQL-based data models on Redshift to support scalable and reliable analytics infrastructure. Business Analytics Expertise: Use statistical and analytical techniques to derive insights that inform pricing, inventory, customer behaviour, revenue optimization, and supply chain efficiency. Team Leadership: Build and mentor a high-performing analytics and BI team, fostering a culture of collaboration, ownership, and continuous improvement. AI/ML Integration: Collaborate with data science teams to operationalize machine learning models into business workflows and reporting systems. Stakeholder Collaboration: Work cross-functionally with product, marketing, operations, and finance teams to identify key metrics, define KPIs, and deliver impactful analytical solutions. Data Governance & Quality: Champion data accuracy, consistency, and integrity in all analytical products and drive best practices for BI development and data visualization. Your Profile To succeed in this role, you: Have a strong analytics background with the ability to translate complex data into clear business recommendations that drive measurable outcomes. Possess 7+ years of experience delivering business intelligence and analytics solutions across multiple functions in a data-driven organization, preferably in e-commerce or retail. Have implemented cloud-based data warehouse solutions on platforms like AWS (Redshift), GCP or Azure. Bring 3+ years of experience leading cross-functional data or analytics teams, with a track record of building scalable reporting and data solutions. Are highly proficient in SQL and comfortable working with large, complex datasets. Have hands-on experience in production-grade analytics environments, including version control (GitHub), Docker, and CI/CD pipelines. Possess excellent problem-solving skills and a proactive, ownership-driven mindset. Excel at communicating complex findings to non-technical stakeholders and influencing strategic decisions. Preferred Qualifications Prior experience in a high-growth, fast-paced e-commerce or technology environment. Exposure to modern BI tools (e.g., Tableau, Power BI) and metric governance. Proficiency with Redshift, dbt, and workflow orchestration tools such as Airflow About Razor Group We are revolutionizing the e-commerce world, reaching over $1 billion in value and over $700 million in revenue, with the backing of top investors like BlackRock, VPC, Fortress, Apollo, 468 Capital, and Rocket Internet. Along with Perch and our previous acquisitions of Factory14 in Spain, Valoreo in Latin America, and our German competitor Stryze, we now operate a catalogue of over 40,000 products across 3 continents and 10+ countries. Headquartered in Berlin, we are also present in Austin, Boston, Delhi NCR, Hangzhou, and Mexico City!
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com. Job Description The Senior Data Engineer is responsible for designing, developing, and supporting ETL data pipeline solutions, primarily in an AWS environment. Design, develop, and maintain scalable ETL processes to deliver meaningful insights from large and complex data sets. Work as part of a team to build out and support the data warehouse, implementing solutions using PySpark to process structured and unstructured data. Play a key role in building out a semantic layer through development of ETLs and virtualized views. Collaborate with Engineering teams to discover and leverage new data being introduced into the environment. Support existing ETL processes written in SQL, or leveraging third-party APIs with Python; troubleshoot and resolve production issues. Strong SQL and data skills to understand and troubleshoot existing complex SQL. Hands-on experience with Apache Airflow or equivalent tools (AWS MWAA) for orchestration of data pipelines. Create and maintain report specifications and process documentation as part of the required data deliverables. Serve as liaison with business and technical teams to achieve project objectives, delivering cross-functional reporting solutions. Troubleshoot and resolve data, system, and performance issues. Communicate with business partners, other technical teams and management to collect requirements, articulate data deliverables, and provide technical designs. Qualifications You have completed graduation (BE/BTech). 6 to 9 years of experience in Data Engineering development. 5 years of experience in Python scripting. 8 years of experience in SQL, with 5+ years in data warehousing and 3 years with cloud. 3 years of experience with the AWS ecosystem (Redshift, EMR, S3, MWAA). 5 years of experience in Agile development methodology. You will work with the team to create solutions. Proficiency in CI/CD tools (Jenkins, GitLab, etc.) Additional Information Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters; DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning; World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024 to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Benefits Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer the best family well-being benefits, enhanced medical benefits and paid time off. Experian Careers - Creating a better tomorrow together Find out what it's like to work for Experian by clicking here
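To illustrate the kind of PySpark ETL work this posting describes, a minimal sketch follows. The S3 bucket names and column names are hypothetical assumptions, not Experian's environment.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-etl").getOrCreate()

# Extract: read raw JSON events from a (hypothetical) landing bucket.
raw = spark.read.json("s3://example-raw-bucket/events/")

# Transform: drop malformed rows, derive a date column, aggregate.
clean = (
    raw.filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned Parquet to a (hypothetical) curated bucket.
clean.write.mode("overwrite").partitionBy("event_date") \
     .parquet("s3://example-curated-bucket/event_counts/")
```

In a setup like the one described, a job of this shape would typically be scheduled and monitored by Airflow or AWS MWAA rather than run by hand.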
Posted 1 week ago
8.0 years
0 Lacs
India
On-site
The Data and Common Services (DCS) team within the Yahoo Advertising Engineering organization is responsible for the Advertising core data infrastructure and services that provide common, horizontal services for user and contextual targeting, privacy and analytics. We are looking for a talented junior or mid-level engineer who can design, implement, and support robust, scalable and high-quality solutions related to Advertising Targeting, Identity, Location and Trust & Verification. As a member of the team, you will be helping our Ad platforms to deliver a highly accurate and relevant Advertising experience for our consumers and for the web at large. Job Location: Hyderabad (Hybrid Work Model) Job Description Design and code backend Java applications and services. Emphasis is placed on implementing maintainable, scalable systems capable of handling billions of requests per day. Analyze business and technical requirements and design solutions that meet those needs. Collaborate with project managers to develop and clarify requirements. Work with Operations Engineers to ensure applications are operations-ready and able to be effectively monitored using automated methods. Troubleshoot production issues related to the team’s applications. Effectively manage day-to-day tasks to meet scheduled commitments. Be able to work independently. Collaborate with programmers both on their team and on other teams. Skills And Education B.Tech/BE in Computer Science or equivalent technical discipline 8+ years of experience designing and programming in a Unix/Linux environment Excellent written and verbal communication skills, e.g., the ability to explain the work in plain language Experience delivering innovative, customer-centric products at high scale A track record of successful delivery as an individual contributor Experience with building robust, scalable, distributed services Execution experience in fast-paced environments and a performance-driven culture Experience with big data technologies, such as Spark, Hadoop, and Airflow Knowledge of CI/CD and DevOps tools and processes Strong programming skills in Java, Python, or Scala Solid understanding of RDBMS and general database concepts Extensive technical knowledge and experience with distributed systems Strong programming, testing, and troubleshooting skills Experience in a public cloud such as AWS Important notes for your attention Applications: All applicants must apply for Yahoo openings direct with Yahoo. We do not authorize any external agencies in India to handle candidates’ applications. No agency nor individual may charge candidates for any efforts they make on an applicant’s behalf in the hiring process. Our internal recruiters will reach out to you directly to discuss the next steps if we determine that the role is a good fit for you. Selected candidates will go through formal interviews and assessments arranged by Yahoo direct. Offer Distributions: Our electronic offer letter and documents will be issued through our system for e-signatures, not via individual emails. Yahoo is proud to be an equal opportunity workplace. All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability or any other protected category.
Yahoo will consider for employment qualified applicants with criminal histories in a manner consistent with applicable law. Yahoo is dedicated to providing an accessible environment for all candidates during the application process and for employees during their employment. If you need accessibility assistance and/or a reasonable accommodation due to a disability, please submit a request via the Accommodation Request Form (www.yahooinc.com/careers/contact-us.html) or call +1.866.772.3182. Requests and calls received for non-disability related issues, such as following up on an application, will not receive a response. Yahoo has a high degree of flexibility around employee location and hybrid working. In fact, our flexible-hybrid approach to work is one of the things our employees rave about. Most roles don’t require specific regular patterns of in-person office attendance. If you join Yahoo, you may be asked to attend (or travel to attend) on-site work sessions, team-building, or other in-person events. When these occur, you’ll be given notice to make arrangements. If you’re curious about how this factors into this role, please discuss with the recruiter. Currently work for Yahoo? Please apply on our internal career site.
Posted 1 week ago
4.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
Remote
Job Title : Senior Data Engineer – Digital Marketing Domain (Qlik Sense) Location : Remote (Work From Home) Shift Timing : EST Notice Period : Immediate Joiners Preferred Experience : 4+ Years Employment Type : Full-Time Job Summary : We are seeking a highly skilled and experienced Senior Data Engineer with a strong background in Digital Marketing data ecosystems and hands-on expertise in Qlik Sense. The ideal candidate will have a proven track record in designing and developing scalable data pipelines, working with cloud data warehousing (especially Snowflake), and enabling data visualization and analytics for cross-functional teams. Key Responsibilities : Support and enhance our data engineering practices across the organization. Design, develop, and maintain scalable ETL/ELT pipelines using modern orchestration tools (e.g., Dagster, Airflow). Lead migration efforts from legacy systems to Snowflake, ensuring performance and scalability. Integrate and manage data from various digital marketing platforms (e.g., Google Ads, Meta Ads, GA4, etc.). Collaborate closely with data analysts and stakeholders to deliver data solutions that support business goals. Ensure data quality, lineage, governance, and security across pipelines and platforms. Develop and optimize Qlik Sense dashboards to visualize marketing and performance data effectively. Required Qualifications: Minimum of 4 years of experience in data engineering roles. Strong hands-on experience with Snowflake or similar cloud data warehousing platforms. Proficiency in SQL and Python for data manipulation and pipeline development. Experience with data orchestration tools such as Dagster, Apache Airflow, or similar. Solid understanding of data modeling and building robust, scalable ETL/ELT workflows. Experience in digital marketing data sources and domain-specific data processing. Familiarity with Qlik Sense or similar data visualization platforms. Excellent communication, analytical thinking, and problem-solving skills. Ability to work effectively in a remote, cross-functional team environment aligned to EST hours.
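As a sketch of the orchestration style this role names (Dagster or Airflow feeding a warehouse), here is a minimal Dagster example. The ad-spend assets are hypothetical; real pipelines would pull from marketing APIs and load Snowflake rather than return hard-coded frames.

```python
import pandas as pd
from dagster import Definitions, asset

@asset
def raw_ad_spend() -> pd.DataFrame:
    # In practice this would pull from an ads API (Google Ads, Meta Ads, GA4);
    # hard-coded rows stand in for that here.
    return pd.DataFrame({"campaign": ["brand", "search"], "spend": [120.0, 80.0]})

@asset
def daily_spend_summary(raw_ad_spend: pd.DataFrame) -> pd.DataFrame:
    # Dagster infers the dependency on raw_ad_spend from the parameter name,
    # so lineage between the two assets is tracked automatically.
    return raw_ad_spend.groupby("campaign", as_index=False)["spend"].sum()

defs = Definitions(assets=[raw_ad_spend, daily_spend_summary])
```

Modeling each pipeline step as an asset is what gives the lineage and data-quality visibility the posting asks for.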
Posted 1 week ago
9.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Clearwater's mission is to be the world's most trusted and comprehensive technology platform that simplifies the entire investment lifecycle. We empower our clients to run efficient investment accounting operations, provide an auditable SaaS platform for integrated investment accounting, analytics, and reporting, foster a diverse and collaborative culture of innovation and excellence, and contribute to our local communities to make a meaningful impact on society. As a Staff Data Engineer at Clearwater, you will play a crucial role in accomplishing our mission. You will be a leading member of the Prism team, which is responsible for managing our external BI reporting platform and the data aggregation pipeline backing it. This team’s product serves as a key data hub for multiple crucial products that rely on Prism Data. In this role, you will Lead the design and execution of scalable data architecture strategy, ensuring alignment with business objectives, operational maturity, and long-term maintainability of systems (e.g., warehouses, lakes, pipelines). Collaborate cross-functionally with data scientists, analysts, and business stakeholders to translate requirements into reliable, high-quality data solutions that drive decision-making. Champion end-to-end ownership of critical data initiatives, driving multi-team, multi-sprint projects from conception to delivery while balancing technical risk and timelines. Design and optimize robust data pipelines, ensuring efficient ingestion, transformation, storage, and retrieval while adhering to security, privacy, and compliance standards. Mentor engineers of all levels, fostering a culture of knowledge-sharing and operational excellence across teams; act as a trusted technical advisor in ambiguous or complex scenarios. Identify and evangelize innovative patterns (e.g., automation, monitoring, testing) to improve data quality, system reliability, and developer velocity organization-wide. Spearhead major modernization efforts, including redesigns of legacy systems and adoption of cutting-edge tools to meet evolving analytical and operational needs. Embed operational rigor into data products through logging, observability, and documentation, empowering less-experienced teams to debug and extend systems independently. Continuously build your skills through regular code reviews, training, mentoring, and access to free trainings on Udemy for Business. About The Technology We leverage a range of technologies to support the development of quality data infrastructure, including: Snowflake as our enterprise data warehouse, with Airflow for workflow orchestration DBT, Prophecy, and Python for developing ELT processes Amazon Web Services as our public cloud provider, with configuration controlled by Terraform and Helm OpenSearch, Dynatrace, and Snowflake-native tooling for logging and monitoring. Git repositories hosted on Gitlab for code management. Atlassian (Jira, Confluence), Office365 (including Microsoft Teams), and Zoom for communication. Quality hardware to support development and communication on Windows or Mac platforms. We would love to hear from you if you have 9+ years of enterprise data engineering experience (data warehousing, ETL development, data modelling, scalable Enterprise Data Warehouse (EDW) solutions, etc.). 5+ years of experience leveraging Snowflake and its various capabilities. 
Experience leveraging dimensional modeling/star schema design concepts in enterprise implementations. Experience with both DBT and Python development. Snowflake performance tuning expertise. Exceptional leadership and mentorship skills. Enthusiasm for data engineering work in a software-as-a-service company. Driven by client satisfaction. Strong communication and teamwork skills. Ability to manage own time and deliver expected results on time. Commitment to continuous learning and improvement. Exceptional problem-solving and analytical skills. Experience running data through a public cloud provider. About Clearwater Analytics Clearwater Analytics® is a global SaaS solution for automated investment data aggregation, reconciliation, accounting, and reporting. Clearwater helps thousands of organizations make the most of investment portfolio data with cloud-native software and client-centric servicing. Every day, investment professionals worldwide trust Clearwater to deliver timely, validated investment data and in-depth reporting. Clearwater aggregates, reconciles, and reports on more than $8 trillion in assets across many Fortune 500 clients. If you are passionate about joining a dynamic team and contributing to a world-class technology platform, we invite you to apply and be part of our mission to simplify the investment lifecycle.
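For a flavor of the Python-plus-Snowflake work this stack implies, a minimal sketch using the Snowflake Python connector follows. The account, credentials, warehouse, and table are placeholders, not Clearwater's schema.

```python
import snowflake.connector

# All connection values below are placeholders for illustration only.
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="EXAMPLE_DB",
    schema="REPORTING",
)
try:
    cur = conn.cursor()
    # A hypothetical aggregation over a positions table.
    cur.execute(
        "SELECT portfolio_id, SUM(market_value) AS total_mv "
        "FROM positions GROUP BY portfolio_id"
    )
    for portfolio_id, total_mv in cur.fetchall():
        print(portfolio_id, total_mv)
finally:
    conn.close()
```

In a stack like the one described, transformation logic of this kind would more often live in DBT models, with Python reserved for orchestration and glue code.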
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Toyota Connected If you want to change the way the world works, transform the automotive industry and positively impact others on a global scale, then Toyota Connected is the right place for you! Within our collaborative, fast-paced environment we focus on continual improvement and work in a highly iterative way to deliver exceptional value in the form of connected products and services that wow and delight our customers and the world around us. About the Team Toyota Connected India is looking for an experienced Data Engineer to build and optimize data pipelines for a real-time Digital Twin platform powering mobility simulation, complex event processing, and multi-agent learning. You’ll design the backbone for scalable, low-latency ingestion and processing of high-volume sensor, vehicle, and infrastructure data to feed prediction models and simulations. What you will do · Design and implement streaming data pipelines from IoT sensors, cameras, vehicle telemetry, and infrastructure systems. · Build scalable infrastructure using Kinesis and Apache Flink/Spark for real-time and batch workloads. · Enable time-series feature stores and sliding-window processing for mobility patterns. · Integrate simulation outputs and model predictions into data lakes in AWS. · Maintain data validation, schema versioning, and high-throughput ingestion. · Collaborate with Data Scientists and Simulation Engineers to optimize data formats (e.g., Parquet, Protobuf, Delta Lake). · Deploy and monitor pipelines on AWS cloud and/or edge infrastructure. You are a successful candidate if you have · 3+ years of experience in data engineering, preferably with real-time systems. · Proficiency with Python, SQL, and distributed data systems (Kinesis, Spark, Flink, etc.). · Strong understanding of event-driven architectures, data lakes, and message serialization. · Experience with sensor data processing, telemetry ingestion, or mobility data is a plus. · Familiarity with Docker, CI/CD, Kubernetes, and cloud-native architectures. · Familiarity with building data pipelines and their workflows (e.g., Airflow). Preferred Qualifications: · Exposure to smart city platforms, V2X ecosystems or other time-series paradigms. · Experience integrating data from cameras and other sensors. What is in it for you? · Top-of-the-line compensation! · You'll be treated like the professional we know you are and left to manage your own time and workload. · Yearly gym membership reimbursement & free catered lunches. · No dress code! We trust you are responsible enough to choose what’s appropriate to wear for the day. · Opportunity to build products that improve the safety and convenience of millions of customers. · Cool office space and other awesome benefits! Our Core Values: EPIC Empathetic: We begin making decisions by looking at the world from the perspective of our customers, teammates, and partners. Passionate: We are here to build something great, not just for the money. We are always looking to improve the experience of our millions of customers. Innovative: We experiment with ideas to get to the best solution. Any constraint is a challenge, and we love looking for creative ways to solve them. Collaborative: When it comes to people, we think the whole is greater than its parts and that everyone has a role to play in the success! To know more about us, check out our Glassdoor page: https://www.glassdoor.co.in/Reviews/TOYOTA-Connected-Corporation-Reviews-E3305334.htm
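As an illustration of the sliding-window telemetry processing this role describes, here is a minimal Spark Structured Streaming sketch. It assumes a Kafka source as a stand-in (the posting mentions Kinesis, whose Spark connector is vendor-specific, while the Kafka connector ships with Spark) and a hypothetical vehicle-telemetry schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

# Requires the spark-sql-kafka connector package on the classpath.
spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

schema = StructType([
    StructField("vehicle_id", StringType()),
    StructField("speed_kmh", DoubleType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
         .option("subscribe", "vehicle-telemetry")           # placeholder topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Sliding-window average speed per vehicle: a 5-minute window advancing every
# minute, with a watermark to bound late-arriving telemetry.
features = (
    events.withWatermark("event_ts", "1 minute")
          .groupBy(F.window("event_ts", "5 minutes", "1 minute"), "vehicle_id")
          .agg(F.avg("speed_kmh").alias("avg_speed"))
)

query = (features.writeStream.outputMode("append").format("parquet")
         .option("path", "s3://example-lake/features/")          # placeholder lake path
         .option("checkpointLocation", "s3://example-lake/ckpt/")
         .start())
```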
Posted 1 week ago
4.0 - 8.0 years
20 - 27 Lacs
Bengaluru
Work from Office
About Zscaler Serving thousands of enterprise customers around the world including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler. Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy. We're looking for an experienced Staff Software Engineer to join our Unified API platform team. Reporting to the Director, you'll be responsible for: Leading identification and resolution of performance issues by developing advanced AI/ML models that pinpoint root causes of poor experience and detect performance bottlenecks for users Developing, maintaining, and refining predictive models to forecast user behavior, system performance, and potential friction points in the digital experience Implementing advanced AI/ML algorithms to detect and forecast anomalies in user experience across multiple dimensions Overseeing the entire lifecycle of ML projects, including analysis, training, testing, building, and deploying ML models into production environments Designing and creating compelling visualizations to effectively communicate findings and insights to both technical and non-technical stakeholders What We're Looking for (Minimum Qualifications) A Bachelor's or (preferably) Master's degree in Computer Science, Data Science, Statistics, or a related field with 4+ years of professional experience in data science or a related role Proficiency with data science tools and platforms such as Python, R, TensorFlow, SQL, and related libraries and frameworks Strong experience with networking and end-point observability systems Expertise in multi-dimensional anomaly detection algorithms, with a specific focus on time-series data sets Strong experience in building and deploying ML models in production environments, including model orchestration using tools like Kubernetes and Airflow What Will Make You Stand Out (Preferred Qualifications) Published research or contributions to the digital experience/end-user observability/networking/data science community Demonstrated experience in the monitoring space, with a deep understanding of user experience metrics, monitoring tools, and methodologies Experience with designing complex systems and scaling ML models on large-scale distributed systems At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure. Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including: Various health plans Time off plans for vacation and sick time Parental leave options Retirement options Education reimbursement In-office perks, and more! By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link. Pay Transparency Zscaler complies with all applicable federal, state, and local pay transparency rules. Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long-term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.
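For a sense of the time-series anomaly detection this role centers on, a deliberately simple rolling z-score sketch follows. Real multi-dimensional models would be far richer; the latency series here is synthetic and the threshold is an arbitrary assumption.

```python
import numpy as np
import pandas as pd

# Synthetic latency series with one injected spike.
rng = np.random.default_rng(42)
latency = pd.Series(rng.normal(50, 5, 500))
latency.iloc[400] = 120

# Rolling z-score over a 60-sample window; the first window is NaN by design.
mean = latency.rolling(window=60).mean()
std = latency.rolling(window=60).std()
z = (latency - mean) / std

# Flag points more than 4 standard deviations from the rolling mean.
anomalies = latency[z.abs() > 4]
print(anomalies)
```

Production systems layer seasonality handling, multi-dimensional correlation, and forecasting on top of this basic idea, but the rolling-baseline-plus-deviation structure is the common starting point.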
Posted 1 week ago
0.0 - 2.0 years
3 - 6 Lacs
Noida
Work from Office
Required Skills: Absolute clarity in OOP fundamentals and data structures Must have hands-on experience with Python data structures such as List, Dict, Set, Strings, Lambda, etc. Must have hands-on experience working with Spark and Hadoop Excellent written and verbal communication and presentation skills Roles and responsibilities: Maintain and improve existing projects Collaborate with the technical team to develop new features and troubleshoot issues Lead projects to understand the requirements and distribute work to the technical team Adhere to project/task timelines and quality standards.
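A short illustration of the data-structure fluency this posting asks for: list, set, and dict operations, string handling, and a lambda. The word list is made up.

```python
words = ["spark", "hadoop", "spark", "airflow"]

unique = set(words)                                      # deduplicate with a set
counts = {w: words.count(w) for w in unique}             # dict comprehension
by_freq = sorted(counts.items(), key=lambda kv: -kv[1])  # lambda as a sort key
print(", ".join(f"{w}:{n}" for w, n in by_freq))         # string join/formatting
```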
Posted 1 week ago
8.0 - 10.0 years
0 Lacs
Chennai
Work from Office
Experience in Python development. Airflow 2.7+: in-depth experience using Apache Airflow. Experience in Docker and Kubernetes for deployment and scaling, with Helm for deploying applications. Experience in setting up and maintaining CI/CD pipelines using Azure DevOps. Experience with cloud platforms.
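Combining the skills above, here is a minimal sketch of an Airflow 2.x DAG that runs a containerized task on Kubernetes. The provider import path varies by cncf-kubernetes provider version, and the image, namespace, and DAG id are placeholders.

```python
from datetime import datetime

from airflow import DAG
# Import path used by recent cncf-kubernetes provider releases; older releases
# expose the operator under ...operators.kubernetes_pod instead.
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(
    dag_id="containerized_job",          # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_job = KubernetesPodOperator(
        task_id="run_job",
        name="run-job",
        namespace="default",
        image="example.azurecr.io/etl-job:latest",   # placeholder image
        cmds=["python", "job.py"],
        get_logs=True,                               # stream pod logs to the task log
    )
```

In a setup like this one, the image itself would be built and pushed by an Azure DevOps pipeline, and the Airflow deployment managed via Helm.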
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hi All, Greetings from Shivsys Softwares. We are hiring for Lead Data Engineer. Role: Data Engineer Experience: 8+ Years Location: Pune Key Responsibilities - Python, Apache Spark, Apache Airflow You can also share your CV at karan.prajapati@shivsys.com
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Come work at a place where innovation and teamwork come together to support the most exciting missions in the world! We are seeking a talented Sr. QA Engineer to deliver roadmap features of the Enterprise TruRisk Platform, which helps customers Measure, Communicate and Eliminate Cyber Risks. The Lead QA Engineer will design, implement, document, and maintain testing frameworks. You will be responsible for the quality of core product capabilities using micro-services and Big Data based components. This is a fantastic opportunity to be an integral part of a team building Qualys' next-generation platform using Big Data & Micro-Services based technology to process billions of transactions per day, leverage open-source technologies, and work on challenging and business-impacting initiatives. Responsibilities: Perform functional testing of the Enterprise TruRisk Platform and its various modules. Conduct integration testing across different systems, working closely with cross-functional teams to ensure seamless data and service flow. Test Big Data ingestion and aggregation pipelines using Spark shell, SQL, and other data tools. Develop and maintain automation frameworks for functional and regression testing. Own and execute end-to-end workflow automation using custom or industry-standard frameworks. Define test strategies, test plans, and test cases for new features, platform enhancements, and services. Debug and troubleshoot issues identified in pre-production or production environments. Drive system performance testing of the platform and data applications. Define operational procedures, service monitors, alerting mechanisms, and coordinate implementation with the NOC team. Collaborate with product and engineering teams to review requirements, specifications, and technical designs, and ensure proper test coverage. Recreate complex production/customer issues to verify root causes and ensure resolution. Identify technical interdependencies, potential issues, and propose effective solutions. Requirements: 6 years of full-time experience in functional testing and automation, including as a lead. Hands-on experience automating backend applications (e.g., database, server side, REST APIs). Knowledge of relational databases and SQL. Good debugging skills. Working experience in a Linux/Unix environment. Good understanding of testing methodologies. Good to have: hands-on experience with Big Data technologies like Hadoop, Spark, Airflow, Kafka, Elastic and other distributed components. Experience in the Security domain is an advantage.
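As an example of the backend REST API automation this role calls for, a minimal pytest-style sketch follows. The base URL, endpoint, and payload are hypothetical, not Qualys APIs; a test like this is discovered and run by pytest.

```python
import requests

BASE_URL = "https://qa-env.example.com/api/v1"   # placeholder QA environment

def test_create_and_fetch_asset():
    # Create a resource, then read it back and verify round-trip consistency.
    created = requests.post(
        f"{BASE_URL}/assets", json={"name": "host-01"}, timeout=10
    )
    assert created.status_code == 201
    asset_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/assets/{asset_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "host-01"
```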
Posted 1 week ago
10.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Designation - Data Centre IT Networking & Architecture Expert Skill - Data Centre IT networking & architecture Experience - Above 10 Years Job Location - New Delhi Shift - Regular Working Days - 5 days a week (WFO) *Notice Period: Candidates who are immediate joiners or with a maximum 30-45 days' notice period will be considered. Certification: CCNP - DC / JNCIP - DC (mandatory) Requirements & Key Skills - Understanding the project requirements, technical specifications & scope of work Experience in data centre planning, design and implementation of at least 2 large-scale data centres, including network, compute, & storage Hands-on experience with multi/cross-platform products and solutions Hands-on experience in architecture, design, deployment, and management of high-availability solutions for networking & security products Development and review/rework of HLD, LLD, SoPs, ATP documents, and DC-DR rack layouts for racking and stacking Design, deploy, and test disaster recovery for the products/solutions at DC and DR Prior experience in data centre design for high-density racks Design and implement power distribution systems, optimize power usage efficiency and ensure redundancy to minimize downtime risks. Architect network infrastructure for client data centre environments, including switches, routers, firewalls and other security & utility solutions. Implement high-speed interconnects and design network topologies to support scalable and resilient connectivity. Develop rack layouts and configurations to maximize space utilization and airflow management, ensuring the facilitation of RU space for the smooth integration of additional planned security solutions (such as Anti-APT solutions, HIDS/HIPS, ZTA etc.), and provide intelligent cabling for these future requirements. Design fault-tolerant architectures to ensure high availability and minimize service disruptions. Architect networking and utility solutions tailored to meet performance, capacity, and data protection requirements. Optimize compute resources through virtualization and containerization technologies. Experience in integration of different IT infra solutions. Support in VAPT. Desirable: Hands-on scripting knowledge (PowerShell/Bash/Perl); automation of migration processes. Note: This position doesn't require expertise or experience in MEP&FP (Mechanical, Electrical, Plumbing, and Fire Protection), but the candidate should understand the concepts used for design. Benefits: We offer a competitive compensation and benefits package, as well as the opportunity to work on challenging and rewarding projects. Regards, Kapalins
Posted 1 week ago
The Airflow job market in India is rapidly growing as more companies adopt data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with Airflow expertise can find lucrative opportunities in various industries such as technology, e-commerce, finance, and more.
The average salary range for Airflow professionals in India varies by experience level:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
In the field of Airflow, a typical career path may progress as follows:
- Junior Airflow Developer
- Airflow Developer
- Senior Airflow Developer
- Airflow Tech Lead
In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in the areas below (a minimal DAG sketch illustrating them follows this list):
- Python programming
- ETL concepts
- Database management (SQL)
- Cloud platforms (AWS, GCP)
- Data warehousing
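To make that skill list concrete, here is a minimal, illustrative Airflow DAG wiring a Python-based ETL together. The task bodies and DAG id are placeholder assumptions; real tasks would query sources, transform data, and load a warehouse.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from a source system")    # placeholder extract step

def transform():
    print("clean and aggregate the rows")      # placeholder transform step

def load():
    print("write results to the warehouse")    # placeholder load step

with DAG(
    dag_id="daily_etl",                        # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                         # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load         # linear dependency chain
```

Interviewers commonly probe exactly this structure: how tasks declare dependencies, how scheduling and catchup behave, and where transformation logic should live.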
As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in Airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!