
15 Data Frames Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Data Engineer II. Location: Bangalore/Mumbai.

About Media.net: Media.net is a leading global ad tech company focused on creating the most transparent and efficient path for advertiser budgets to become publisher revenue. Our proprietary contextual technology is at the forefront of enhancing programmatic buying, the latest industry standard in ad buying for digital platforms. The Media.net platform powers major global publishers and ad-tech businesses at scale across ad formats such as display, video, mobile, native, and search. Media.net's U.S. HQ is in New York, and the global HQ is in Dubai. With office locations and consultant partners across the world, Media.net takes pride in the value it adds for its 50+ demand and 21K+ publisher partners, in terms of both products and services.

What does the team do? Every web page view hits one or more of the team's services, which are built to handle this large volume of requests at high scale across 5 million unique topics. To achieve this, the team uses cutting-edge machine learning and AI technologies on a large Hadoop cluster. Tech stack: Java, Elasticsearch/Solr, Kafka, Spark, Machine Learning, NLP, Deep Learning, Redis, and Big Data technologies such as Hadoop, HBase, and YARN.

Roles and Responsibilities: Design, execution, and management of large and complex distributed data systems. Monitoring performance and optimizing existing projects. Researching and integrating any Big Data tools and frameworks required to provide requested capabilities. Understanding business/data requirements and implementing scalable solutions. Creating reusable components and data tools that help all teams in the company integrate with our data platform.

Who should apply for this role: 2 to 4 years of experience in big data technologies (Apache Hadoop) and relational databases (MS SQL Server/Oracle/MySQL/Postgres). Proficiency in at least one of the following programming languages: Java, Python, or Scala. Expertise in SQL (T-SQL/PL-SQL/Spark SQL/HiveQL). Proficiency in Apache Spark. Hands-on knowledge of working with DataFrames, Datasets, RDDs, and the Spark SQL/PySpark/Scala APIs, with a deep understanding of performance optimizations. Good understanding of distributed storage (HDFS/S3). Strong analytical/quantitative skills and comfort working with very large sets of data. Experience with integration of data across multiple data sources. Good understanding of distributed computing principles. Good to have: Experience with message queues (e.g., Apache Kafka). Experience with MPP systems (e.g., Redshift/Snowflake). Experience with NoSQL storage (e.g., MongoDB).
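
As an illustrative sketch of the DataFrame and Spark SQL skills this posting asks for (the input path, columns, and aggregation are hypothetical, not taken from the job description):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-demo").getOrCreate()

# Hypothetical input: page-view events stored as Parquet
events = spark.read.parquet("s3a://example-bucket/events/")

# DataFrame API: filter, aggregate, and rank topics by view count
topic_counts = (
    events
    .filter(F.col("event_type") == "page_view")
    .groupBy("topic_id")
    .agg(F.count("*").alias("views"))
    .orderBy(F.desc("views"))
)

# The same logic via Spark SQL on a temporary view
events.createOrReplaceTempView("events")
topic_counts_sql = spark.sql("""
    SELECT topic_id, COUNT(*) AS views
    FROM events
    WHERE event_type = 'page_view'
    GROUP BY topic_id
    ORDER BY views DESC
""")

topic_counts.show(10)
```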

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Join us as a Senior Developer at Barclays, where you will play a crucial role in supporting the successful delivery of Location Strategy projects while adhering to plan, budget, quality, and governance standards. Your primary responsibility will be to drive the evolution of our digital landscape, fostering innovation and excellence. By leveraging cutting-edge technology, you will lead the transformation of our digital offerings, ensuring unparalleled customer experiences. To excel in this role as a Senior Developer, you should possess the following experience and skills: - Solid hands-on development experience with Scala, Spark, Python, and Java. - Excellent working knowledge of Hadoop components such as HDFS, Hive, Impala, HBase, and DataFrames. - Proficiency in Jenkins build pipelines or other CI/CD tools. - Sound understanding of Data Warehousing principles and Data Modeling. Additionally, highly valued skills may include: - Experience with AWS services like S3, Athena, DynamoDB, Lambda, and Databricks. - Working knowledge of Jenkins, Git, and Unix. Your performance may be assessed based on critical skills essential for success in this role, including risk and controls management, change and transformation capabilities, business acumen, strategic thinking, and proficiency in digital and technology aspects. This position is based in Pune.

**Purpose of the Role:** The purpose of this role is to design, develop, and enhance software solutions using various engineering methodologies to deliver business, platform, and technology capabilities for our customers and colleagues.

**Accountabilities:** - Develop and deliver high-quality software solutions using industry-aligned programming languages, frameworks, and tools. Ensure that the code is scalable, maintainable, and optimized for performance. - Collaborate cross-functionally with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration with business objectives. - Engage in peer collaboration, participate in code reviews, and promote a culture of code quality and knowledge sharing. - Stay updated on industry technology trends, contribute to the organization's technology communities, and foster a culture of technical excellence and growth. - Adhere to secure coding practices to mitigate vulnerabilities, protect sensitive data, and deliver secure software solutions. - Implement effective unit testing practices to ensure proper code design, readability, and reliability.

**Assistant Vice President Expectations:** As an Assistant Vice President, you are expected to: - Provide consultation on complex issues, offering advice to People Leaders to resolve escalated matters. - Identify and mitigate risks, and develop new policies/procedures to support the control and governance agenda. - Take ownership of risk management and control strengthening related to the work undertaken. - Engage in complex data analysis from various internal and external sources to creatively solve problems. - Communicate complex information effectively to stakeholders. - Influence or convince stakeholders to achieve desired outcomes.

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as embrace the Barclays Mindset to Empower, Challenge, and Drive as guiding principles for our behavior.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

As a Data Engineer II at Media.net, you will be responsible for designing, executing, and managing large and complex distributed data systems. Your role will involve monitoring performance, optimizing existing projects, and researching and integrating Big Data tools and frameworks as required to meet business and data requirements. You will play a key part in implementing scalable solutions, creating reusable components and data tools, and collaborating with teams across the company to integrate with the data platform efficiently. The team you will be a part of ensures that every web page view is seamlessly processed through high-scale services, handling a large volume of requests across 5 million unique topics. Leveraging cutting-edge Machine Learning and AI technologies on a large Hadoop cluster, you will work with a tech stack that includes Java, Elasticsearch/Solr, Kafka, Spark, Machine Learning, NLP, Deep Learning, Redis, and Big Data technologies such as Hadoop, HBase, and YARN. To excel in this role, you should have 2 to 4 years of experience in big data technologies like Apache Hadoop and relational databases (MS SQL Server/Oracle/MySQL/Postgres). Proficiency in programming languages such as Java, Python, or Scala is required, along with expertise in SQL (T-SQL/PL-SQL/Spark SQL/HiveQL) and Apache Spark. Hands-on knowledge of working with DataFrames, Datasets, RDDs, and the Spark SQL/PySpark/Scala APIs, and a deep understanding of performance optimizations, will be essential. Additionally, you should have a good grasp of distributed storage (HDFS/S3), strong analytical and quantitative skills, and experience with data integration across multiple sources. Experience with message queues like Apache Kafka, MPP systems such as Redshift/Snowflake, and NoSQL storage like MongoDB would be considered advantageous for this role. If you are passionate about working with cutting-edge technologies, collaborating with global teams, and contributing to the growth of a leading ad tech company, we encourage you to apply for this challenging and rewarding opportunity.
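
As a hedged illustration of the Spark performance-optimization skills mentioned above (a broadcast join plus caching and repartitioning before a wide aggregation), with hypothetical paths and column names:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("perf-demo").getOrCreate()

# Hypothetical datasets: a large fact table and a small dimension table
views = spark.read.parquet("hdfs:///data/page_views/")   # large
topics = spark.read.parquet("hdfs:///data/topic_dim/")   # small lookup

# Broadcast the small dimension to avoid a shuffle-heavy join
enriched = views.join(F.broadcast(topics), on="topic_id", how="left")

# Cache the reused intermediate result and control partitioning before aggregating
enriched = enriched.repartition(200, "topic_id").cache()

daily = enriched.groupBy("topic_id", "view_date").agg(F.count("*").alias("views"))
daily.write.mode("overwrite").partitionBy("view_date").parquet("hdfs:///data/daily_topic_views/")
```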

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

As a Databricks AWS/Azure/GCP Architect at Koantek based in Mumbai, you will play a crucial role in building secure and highly scalable big data solutions that drive tangible, data-driven outcomes while emphasizing simplicity and operational efficiency. Collaborating with teammates, product teams, and cross-functional project teams, you will lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. Your responsibilities will include implementing securely architected big data solutions that are operationally reliable, performant, and aligned with strategic initiatives. Your expertise should include expert-level knowledge of data frameworks, data lakes, and open-source projects like Apache Spark, MLflow, and Delta Lake. You should possess hands-on coding experience in Spark/Scala, Python, or PySpark. An in-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib, is essential for this role. Experience with IoT/event-driven/microservices in the cloud, familiarity with private and public cloud architectures, and extensive hands-on experience in implementing data migration and data processing using AWS/Azure/GCP services are key requirements. With over 9 years of consulting experience and a minimum of 7 years in data engineering, data platform, and analytics, you should have a proven track record of delivering projects with hands-on development experience on Databricks. Your knowledge of at least one cloud platform (AWS, Azure, or GCP) is mandatory, along with deep experience in distributed computing with Spark and familiarity with Spark runtime internals. Additionally, you should be familiar with CI/CD for production deployments and optimization for performance and scalability, and have completed data engineering professional certification and required classes. If you are a results-driven professional with a passion for architecting cutting-edge big data solutions and have the desired skill set, we encourage you to apply for this exciting opportunity.
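
A minimal PySpark sketch of Delta Lake usage on a Databricks-style platform, assuming the Delta Lake libraries are available on the cluster; the storage paths and columns are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

# Hypothetical landing and curated paths on cloud object storage
raw_path = "s3a://example-lake/raw/orders/"
delta_path = "s3a://example-lake/curated/orders_delta/"

# Ingest raw files and append them to a Delta table (ACID writes, schema enforcement)
orders = spark.read.json(raw_path)
orders.write.format("delta").mode("append").save(delta_path)

# Read the Delta table back and query it with Spark SQL
spark.read.format("delta").load(delta_path).createOrReplaceTempView("orders")
spark.sql("SELECT order_date, COUNT(*) AS n FROM orders GROUP BY order_date").show()
```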

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Applications Development Intermediate Programmer Analyst position at our organization involves working at an intermediate level to assist in the development and implementation of new or updated application systems and programs in collaboration with the Technology team. Your main responsibility will be to contribute to application systems analysis and programming activities. You will be expected to utilize your knowledge of applications development procedures and concepts, as well as basic knowledge of other technical areas, to identify and define necessary system enhancements. This includes using script tools, analyzing code, and consulting with users, clients, and other technology groups to recommend programming solutions. Additionally, you will be involved in installing and supporting customer exposure systems and applying programming languages for design specifications. As an Applications Development Intermediate Programmer Analyst, you will also be responsible for analyzing applications to identify vulnerabilities and security issues, conducting testing and debugging, and serving as an advisor or coach to new or lower-level analysts. You should be able to identify problems, analyze information, and make evaluative judgments to recommend and implement solutions with a limited level of direct supervision. Furthermore, you will play a key role in resolving issues by selecting solutions based on your technical experience and guided by precedents. You will have the opportunity to exercise independence of judgment and autonomy, act as a subject matter expert to senior stakeholders and team members, and appropriately assess risk when making business decisions. To qualify for this role, you should have 4-8 years of relevant experience in the Financial Services industry, intermediate-level experience in an Applications Development role, clear and concise written and verbal communication skills, problem-solving and decision-making abilities, and the capacity to work under pressure and manage deadlines or unexpected changes in expectations or requirements. A Bachelor's degree or equivalent experience is required for this position. In addition to the responsibilities outlined above, the ideal candidate should possess expertise in various technical areas, including strong Java programming skills, object-oriented programming, data structures, design patterns, web frameworks such as Flask and Django, Big Data technologies such as PySpark and Hadoop ecosystem components, and REST web services. Experience in Spark performance tuning, PL/SQL, SQL, Transact-SQL, data processing in different file types, UI frameworks, source code management tools like Git, Agile methodology, and issue trackers like Jira is highly desirable. This job description offers a comprehensive overview of the role's responsibilities and qualifications. Please note that other job-related duties may be assigned as necessary. If you require a reasonable accommodation due to a disability to use our search tools or apply for a career opportunity, please review Accessibility at Citi. For additional information, you can view Citi's EEO Policy Statement and the Know Your Rights poster.
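
As an illustration of processing data in different file types with PySpark, referenced in the desirable skills above, a small hedged sketch (paths and column names are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("file-formats-demo").getOrCreate()

# Hypothetical inputs in three common formats
csv_df = spark.read.option("header", True).option("inferSchema", True).csv("/data/in/trades.csv")
json_df = spark.read.json("/data/in/positions.json")
parquet_df = spark.read.parquet("/data/in/reference/")

# Normalise to a shared subset of columns and persist the combined result as Parquet
combined = csv_df.select("account_id", "amount").unionByName(json_df.select("account_id", "amount"))
combined.write.mode("overwrite").parquet("/data/out/combined/")
```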

Posted 1 week ago

Apply

4.0 - 9.0 years

8 - 18 Lacs

Bengaluru

Hybrid

*** We are looking for immediate joiners at our Bangalore office *** Overview: The resource should have solid fundamentals in the technologies currently used for the ETL project, i.e., SQL, Python, Databricks, and Azure Data Factory. Roles & Responsibilities: Design, build, and maintain ETL pipelines in Azure (ADF, Databricks). Develop and optimize SQL queries, transformations, and data frames. Write clean, reusable Python/SQL code for data engineering tasks. Ensure data quality, reliability, visibility, and performance across pipelines. Collaborate effectively within the team and in Agile delivery cycles. Desired Skill Set: Proficiency in Azure Data Factory, SQL, Azure Databricks (PySpark), and Python. Experience with pipeline-based development (parameterization, reusable components, incremental loading, error handling, and data monitoring/notifications). Knowledge of ETL best practices, especially metadata-driven design and scalable pipeline development.
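
A hedged sketch of the incremental-loading pattern referenced above, as a Databricks job step that ADF might orchestrate; the paths, table name, watermark column, and control-table handling are assumptions, not details from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-load-demo").getOrCreate()

# Hypothetical parameters that ADF would pass to the Databricks job
source_path = "abfss://landing@examplestorage.dfs.core.windows.net/sales/"
target_table = "curated.sales"
last_watermark = "2024-01-01T00:00:00"  # normally read from a control/metadata table

# Incremental load: only rows newer than the stored watermark
new_rows = (
    spark.read.parquet(source_path)
    .filter(F.col("modified_at") > F.lit(last_watermark))
)

if new_rows.take(1):  # basic guard so empty batches do not trigger a write
    new_rows.write.mode("append").saveAsTable(target_table)
    new_watermark = new_rows.agg(F.max("modified_at")).first()[0]
    # the new high-water mark would be written back to the control table here
```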

Posted 2 weeks ago

Apply

4.0 - 6.0 years

14 - 22 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Job Title: Ab Initio Developer Job Summary: We're looking for an experienced Ab Initio developer to join our team. As an Ab Initio developer, you will design, develop, test, and implement data integration and data processing applications using Ab Initio software. Your primary focus will be on leveraging Ab Initio's capabilities to meet our organization's data warehousing and business intelligence needs. Key Responsibilities: Design and develop Ab Initio graphs, components, and applications to meet business requirements. Write efficient and scalable Ab Initio code, including data transformations, data validation, and data loading. Collaborate with cross-functional teams to identify data integration requirements and develop solutions. Optimize Ab Initio graphs for performance, scalability, and reliability. Troubleshoot and resolve issues related to Ab Initio applications and data integration processes. Develop and maintain technical documentation for Ab Initio applications and processes. Participate in code reviews and ensure adherence to best practices and coding standards. Requirements: 3+ years of experience in Ab Initio development, including designing and developing Ab Initio graphs and applications. Strong knowledge of Ab Initio software, including Ab Initio Graph, Ab Initio Meta>Environment, and Ab Initio Enterprise Meta>Environment. Experience with data integration, data warehousing, and business intelligence concepts. Strong programming skills in Ab Initio's proprietary language (PDL) and familiarity with other programming languages (e.g., Perl, Python). Experience with data modeling, data governance, and data quality. Strong analytical and problem-solving skills, with attention to detail and ability to work in a fast-paced environment. Nice to Have: Experience with cloud-based Ab Initio environments. Knowledge of database management systems (e.g., Oracle, Teradata). Familiarity with data integration tools (e.g., Informatica, Talend). Experience with Agile development methodologies.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data/AWS Engineer at Waters Global Research, you will be part of a dynamic team focused on researching and developing self-diagnosing, self-healing instruments to enhance the user experience of our customers. By leveraging cutting-edge technologies and innovative solutions, you will play a crucial role in advancing our analytical chemistry instruments, which have a direct impact on fields such as laboratory testing, drug discovery, and food safety. Your primary responsibility will be to develop data pipelines for specialty instrument data and Gen AI processes, train machine learning models for error diagnosis, and automate manual processes to optimize instrument procedures. You will work on projects aimed at interpreting raw data results, cleaning anomalous data, and deploying models in AWS to collect and analyze results effectively. Key Responsibilities: - Build data pipelines in AWS using services like S3, Lambda, IoT Core, and EC2. - Create and maintain dashboards to monitor data health and performance. - Containerize models and deploy them in AWS for efficient data processing. - Develop Python data pipelines to handle data frames and matrices, ensuring smooth data ingestion, transformation, and storage. - Collaborate with Machine Learning engineers to evaluate data and models, and present findings to stakeholders. - Mentor team members and review their code to ensure best coding practices and adherence to standards. Qualifications: Required Qualifications: - Bachelor's degree in computer science or a related field with 5-8 years of relevant work experience. - Proficiency in AWS services such as S3, EC2, Lambda, and IAM. - Experience with containerization and deployment of code in AWS. - Strong programming skills in Python for OOP and/or functional programming. - Familiarity with Git, BASH, and the command prompt. - Ability to drive new capabilities, solutions, and data best practices from technical documentation. - Excellent communication skills to convey results effectively to non-data scientists. Desired Qualifications: - Experience with C#, C++, and .NET is considered a plus. What We Offer: - Hybrid role with competitive compensation and great benefits. - Continuous professional development opportunities. - Inclusive environment that encourages contributions from all team members. - Reasonable adjustments to the interview process based on individual needs. Join Waters Corporation, a global leader in specialty measurement, and be part of a team that drives innovation in chromatography, mass spectrometry, and thermal analysis. With a focus on creating business advantages for various industries, including life sciences, materials, and food sciences, we aim to transform healthcare delivery, environmental management, food safety, and water quality. At Waters, we empower our employees to unlock their full potential, learn, grow, and make a tangible impact on human health and well-being. We value collaboration, problem-solving, and innovation to address the challenges of today and tomorrow. Join us to be part of a team that delivers benefits as one and provides insights for a better future.
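
As an illustrative sketch of the kind of S3-based Python pipeline step described above (for example, inside a Lambda handler), using boto3 and pandas; the bucket, key layout, and column names are hypothetical:

```python
import io
import json

import boto3
import pandas as pd

s3 = boto3.client("s3")

def handler(event, context):
    """Illustrative Lambda handler: read an instrument CSV from S3,
    derive simple health metrics, and write a summary back to S3."""
    bucket = event["bucket"]
    key = event["key"]

    # Download and parse the raw CSV into a DataFrame
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    # Hypothetical health metrics derived from a 'status' column
    summary = {
        "rows": int(len(df)),
        "error_rate": float((df["status"] == "error").mean()),
    }

    s3.put_object(
        Bucket=bucket,
        Key=key.replace("raw/", "summary/") + ".json",
        Body=json.dumps(summary).encode("utf-8"),
    )
    return summary
```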

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

Job Description: We are looking for a skilled PySpark Developer with 4-5 or 2-3 years of experience to join our team. As a PySpark Developer, you will be responsible for developing and maintaining data processing pipelines using PySpark, Apache Spark's Python API. You will work closely with data engineers, data scientists, and other stakeholders to design and implement scalable and efficient data processing solutions. A Bachelor's or Master's degree in Computer Science, Data Science, or a related field is required. The ideal candidate should have strong expertise in the Big Data ecosystem, including Spark, Hive, Sqoop, HDFS, MapReduce, Oozie, YARN, HBase, and NiFi. The candidate should be below 35 years of age and have experience in designing, developing, and maintaining PySpark data processing pipelines to process large volumes of structured and unstructured data. Additionally, the candidate should collaborate with data engineers and data scientists to understand data requirements and design efficient data models and transformations. Optimizing and tuning PySpark jobs for performance, scalability, and reliability is a key responsibility. Implementing data quality checks, error handling, and monitoring mechanisms to ensure data accuracy and pipeline robustness is crucial. The candidate should also develop and maintain documentation for PySpark code, data pipelines, and data workflows. Experience in developing production-ready Spark applications using Spark RDD APIs, DataFrames, Datasets, Spark SQL, and Spark Streaming is required. Strong experience with Hive bucketing and partitioning, as well as writing complex Hive queries using analytical functions, is essential. Knowledge of writing custom UDFs in Hive to support custom business requirements is a plus. If you meet the above qualifications and are interested in this position, please email your resume, mentioning the position applied for in the subject line, to: careers@cdslindia.com.
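
A minimal PySpark sketch touching the Hive bucketing/partitioning, analytical (window) functions, and custom UDFs mentioned above; it assumes a Hive-enabled Spark session, and the table and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Hypothetical Hive table, partitioned by date and bucketed by customer
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_bucketed (
        order_id BIGINT, customer_id BIGINT, amount DOUBLE
    )
    PARTITIONED BY (order_date STRING)
    CLUSTERED BY (customer_id) INTO 8 BUCKETS
    STORED AS ORC
""")

# A simple custom UDF registered for use in Hive-style SQL
def amount_band(amount):
    return "high" if amount is not None and amount > 1000 else "low"

spark.udf.register("amount_band", amount_band, StringType())

# Analytical (window) function: top order per customer
top_per_customer = spark.sql("""
    SELECT * FROM (
        SELECT order_id, customer_id, amount, amount_band(amount) AS band,
               ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rn
        FROM sales_bucketed
    ) WHERE rn = 1
""")
top_per_customer.show()
```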

Posted 1 month ago

Apply

5.0 - 8.0 years

17 - 25 Lacs

Bengaluru

Hybrid

Designation: Python Application Developer. Job Profile: The developer is responsible for developing software components based on the architecture/design provided. Main Responsibilities: Technical ownership of, and accountability for, specific components/modules of development projects. Analyze the requirements and come up with estimates for the assigned modules. Write feature specification / detailed design documents for the modules. Develop and own software components. Work from the Bangalore location. Desired Qualification and Experience: B.E/B.Tech (Computer Science or equivalent); around 5 years of experience in software development, preferably in the industrial automation domain. Mandatory / Primary Skills: Programming experience in the Python numeric ecosystem; very good experience with Pandas, NumPy, and DataFrames is a must; good knowledge of SQL and database concepts; programming experience in web application development using JavaScript, TypeScript, and HTML/CSS; object-oriented programming. Desired / Secondary Skills: Programming experience with the Python HoloViz/Panel framework and Jupyter Notebook; experience working on Distributed Control Systems (DCS), communication protocols, and OPC concepts in the industrial automation domain; practical knowledge/general experience in software design and software development. Soft Skills: Excellent communication skills and the ability to take up technical challenges; good analytical and problem-solving skills (able to analyze software requirements); ability to collaborate with architects and team members and willingness to learn new topics like PCS7 and ASM.
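
A short, illustrative Pandas/NumPy DataFrame example in the spirit of the mandatory skills above; the tag names and statistics are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical process measurements from an automation system
rng = np.random.default_rng(seed=42)
df = pd.DataFrame({
    "tag": np.repeat(["TI-101", "PI-202", "FI-303"], 100),
    "value": rng.normal(loc=50.0, scale=5.0, size=300),
})

# Typical DataFrame operations: grouping, vectorised NumPy math, rolling statistics
stats = df.groupby("tag")["value"].agg(["mean", "std", "min", "max"])
df["zscore"] = (df["value"] - df.groupby("tag")["value"].transform("mean")) / \
               df.groupby("tag")["value"].transform("std")
df["rolling_mean"] = df.groupby("tag")["value"].transform(lambda s: s.rolling(10, min_periods=1).mean())

print(stats)
```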

Posted 3 months ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Designation: Python + AWS. Experience: 5+ years. Work Location: Bangalore / Mumbai. Notice Period: Immediate joiners / serving notice period. Job Description - Mandatory Skills: Python data structures (pandas, NumPy); data operations (DataFrames, dicts, JSON, lists, tuples, strings); OOP and APIs (Flask/FastAPI); AWS services (IAM, EC2, Lambda, S3, DynamoDB, etc.). Sincerely, Sonia TS
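
A hedged FastAPI sketch combining the DataFrame operations and API skills listed above; the endpoint, sample data, and module name are illustrative only:

```python
from typing import Optional

import pandas as pd
from fastapi import FastAPI

app = FastAPI()

# Hypothetical in-memory dataset; in practice this might come from S3 or DynamoDB
orders = pd.DataFrame(
    [
        {"order_id": 1, "region": "south", "amount": 120.0},
        {"order_id": 2, "region": "north", "amount": 75.5},
    ]
)

@app.get("/orders")
def list_orders(region: Optional[str] = None):
    """Return orders as JSON records, optionally filtered by region."""
    df = orders if region is None else orders[orders["region"] == region]
    return df.to_dict(orient="records")

# Run locally with: uvicorn app:app --reload  (module name is illustrative)
```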

Posted 3 months ago

Apply

9.0 - 12.0 years

35 - 40 Lacs

Bengaluru

Work from Office

We are seeking an experienced AWS Architect with a strong background in designing and implementing cloud-native data platforms. The ideal candidate should possess deep expertise in AWS services such as S3, Redshift, Aurora, Glue, and Lambda, along with hands-on experience in data engineering and orchestration tools. Strong communication and stakeholder management skills are essential for this role. Key Responsibilities: Design and implement end-to-end data platforms leveraging AWS services. Lead architecture discussions and ensure scalability, reliability, and cost-effectiveness. Develop and optimize solutions using Redshift, including stored procedures, federated queries, and the Redshift Data API. Utilize AWS Glue and Lambda functions to build ETL/ELT pipelines. Write efficient Python code and data frame transformations, along with unit testing. Manage orchestration tools such as AWS Step Functions and Airflow. Perform Redshift performance tuning to ensure optimal query execution. Collaborate with stakeholders to understand requirements and communicate technical solutions clearly. Required Skills & Qualifications: Minimum 9 years of IT experience with proven AWS expertise. Hands-on experience with AWS services: S3, Redshift, Aurora, Glue, and Lambda. Mandatory experience working with AWS Redshift, including stored procedures and performance tuning. Experience building end-to-end data platforms on AWS. Proficiency in Python, especially working with data frames and writing testable, production-grade code. Familiarity with orchestration tools like Airflow or AWS Step Functions. Excellent problem-solving skills and a collaborative mindset. Strong verbal and written communication and stakeholder management abilities. Nice to Have: Experience with CI/CD for data pipelines. Knowledge of AWS Lake Formation and Data Governance practices.
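
As a small illustration of the "Python data frame transformations, along with unit testing" point above, a hedged sketch with hypothetical column names and tax rate:

```python
import pandas as pd

def add_net_amount(df: pd.DataFrame, tax_rate: float = 0.18) -> pd.DataFrame:
    """Pure DataFrame transformation: derive a net amount from a gross amount.
    Column names and the tax rate are illustrative."""
    out = df.copy()
    out["net_amount"] = out["gross_amount"] / (1 + tax_rate)
    return out

def test_add_net_amount():
    # Unit test for the transformation, runnable with pytest or directly
    df = pd.DataFrame({"gross_amount": [118.0, 236.0]})
    result = add_net_amount(df, tax_rate=0.18)
    assert result["net_amount"].round(2).tolist() == [100.0, 200.0]

if __name__ == "__main__":
    test_add_net_amount()
    print("ok")
```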

Posted 3 months ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

Strong programming skills in Python and advanced SQL. Strong experience with NumPy, Pandas, and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.

Posted 3 months ago

Apply

5 - 10 years

15 - 20 Lacs

Bengaluru

Work from Office

Urgent hiring for a reputed MNC. Role: Data Analyst. Experience: 5-10 years. Only immediate joiners. Location: Bangalore. JD - Data Analyst, Mandatory Skills: 1. SQL: Proficient in database object creation, including tables, views, indexes, etc.; strong expertise in SQL queries, stored procedures, and functions; experienced in performance tuning and optimization techniques. 2. Power BI: Proficiency in Power BI development, including report and dashboard creation; design, develop, and maintain complex Power BI data models, ensuring data integrity and consistency; comprehensive understanding of data modeling and data visualization concepts; identify and resolve performance bottlenecks in Power BI reports and data models; experience with Power Query and DAX. 3. Problem-Solving Skills: Strong analytical and problem-solving skills to identify and resolve data-related issues. 4. Python: Strong proficiency in Python programming. 5. PySpark: Extensive experience with PySpark, including DataFrames and Spark SQL.

Posted 4 months ago

Apply

7 - 11 years

50 - 60 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Role: Resident Solution Architect. Location: Remote.

The Solution Architect at Koantek builds secure, highly scalable big data solutions to achieve tangible, data-driven outcomes, all while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. This role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives.

Specific requirements for the role include: Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake. Expert-level hands-on coding experience in Python, SQL, Spark/Scala, or PySpark. In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib. IoT/event-driven/microservices in the cloud: experience with private and public cloud architectures, their pros/cons, and migration considerations. Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services. Extensive hands-on experience with the technology stack available in the industry for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc. Experience using Azure DevOps and CI/CD as well as Agile tools and processes, including Git, Jenkins, Jira, and Confluence. Experience in creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/Scala. Able to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and defining conceptual, logical, and physical data models. Proficient-level experience with architecture design, build, and optimization of big data collection, ingestion, storage, processing, and visualization.

Responsibilities: Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation. Guide customers in transforming big data projects, including development and deployment of big data and AI applications. Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable. Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure. Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling. Aid developers in identifying, designing, and implementing process improvements with automation tools to optimize data delivery. Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it. Employ change management best practices to ensure that data remains readily accessible to the business. Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs, with experience in MDM using data governance solutions.

Qualifications: Overall experience of 12+ years in the IT field. Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for near real-time data warehouses, and machine learning solutions. Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions. Experience in software development, data engineering, or data analytics using Python, Scala, Spark, Java, or equivalent technologies. Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience. Good to have: advanced technical certifications such as Azure Solutions Architect Expert, AWS Certified Data Analytics, DASCA Big Data Engineering and Analytics, AWS Certified Cloud Practitioner, Solutions Architect, or Professional Google Cloud Certified.

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
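
A hedged Structured Streaming sketch in the spirit of the near real-time pipelines described above (a Kafka source feeding a Delta table on ADLS); it assumes the Kafka and Delta Lake connectors are available on the cluster, and the broker, topic, and paths are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

# Hypothetical Kafka source
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # illustrative broker
    .option("subscribe", "orders")                       # illustrative topic
    .load()
)

# Kafka values arrive as bytes; pull fields out of the JSON payload
parsed = events.select(
    F.get_json_object(F.col("value").cast("string"), "$.order_id").alias("order_id"),
    F.get_json_object(F.col("value").cast("string"), "$.amount").cast("double").alias("amount"),
)

# Continuously append into a Delta table backing the near real-time warehouse
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "abfss://lake@examplestorage.dfs.core.windows.net/_chk/orders/")
    .outputMode("append")
    .start("abfss://lake@examplestorage.dfs.core.windows.net/curated/orders/")
)
```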

Posted 4 months ago

Apply