8765 Pyspark Jobs - Page 47

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Wipro Limited is a leading technology services and consulting company that focuses on creating innovative solutions to meet the complex digital transformation needs of clients. With a holistic portfolio of capabilities in consulting, design, engineering, and operations, Wipro helps clients achieve their boldest ambitions and develop sustainable, future-ready businesses. With a global presence of over 230,000 employees and business partners across 65 countries, Wipro is committed to helping customers, colleagues, and communities thrive in an ever-changing world.

We are currently looking for an ETL Test Lead with the following qualifications:

Primary Skill: ETL Testing
Secondary Skill: Azure

Key Requirements:
- 5+ years of experience in data warehouse testing, with at least 2 years of experience in Azure Cloud
- Strong understanding of data marts and data warehouse concepts
- Expertise in SQL, with the ability to create source-to-target comparison test cases
- Proficiency in creating test plans, test cases, traceability matrices, and closure reports
- Familiarity with PySpark, Python, Git, Jira, and JTM

Band: B3
Location: Pune, Chennai, Coimbatore, Bangalore
Mandatory Skills: ETL Testing
Experience: 5-8 Years

At Wipro, we are building a modern organization focused on digital transformation. We are looking for individuals who are inspired by reinvention and committed to evolving themselves, their careers, and their skills. Join us in our journey to constantly evolve and adapt to the changing world around us. Come to Wipro and realize your ambitions. We welcome applications from individuals with disabilities.
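For illustration, here is a minimal sketch of the kind of source-to-target comparison test case this listing mentions, written with PySpark SQL. All table and column names (stg_orders, dw_orders, order_id, amount) are hypothetical, not from the posting:

```python
# Hypothetical source-to-target ETL test: row counts must match, and no
# source rows may be missing from the target. Tables/columns are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-count-check").getOrCreate()

# Row-count reconciliation: staging source vs. warehouse target.
counts = spark.sql("""
    SELECT
        (SELECT COUNT(*) FROM stg_orders) AS src_count,
        (SELECT COUNT(*) FROM dw_orders)  AS tgt_count
""").collect()[0]
assert counts.src_count == counts.tgt_count, (
    f"Row count mismatch: source={counts.src_count}, target={counts.tgt_count}"
)

# Column-level check: rows present in source but absent from target.
missing = spark.sql("""
    SELECT order_id, amount FROM stg_orders
    EXCEPT
    SELECT order_id, amount FROM dw_orders
""")
assert missing.count() == 0, "Source rows not found in target"
```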

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have a minimum of 5-7 years of experience in data engineering and transformation on the Cloud, with a strong focus on Azure Data Engineering and Databricks for at least 3 years. Your expertise should include supporting and developing data warehouse workloads at an enterprise level. Proficiency in PySpark is essential for developing and deploying workloads to run on the Spark distributed computing platform.

A Bachelor's degree in Computer Science, Information Technology, Engineering (Computer/Telecommunication), or a related field is required for this role. Experience with cloud deployment, preferably on Microsoft Azure, is highly desirable. You should also have experience in implementing platform and application monitoring using cloud-native tools, as well as implementing application self-healing through proactive and reactive automated measures.
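As a rough illustration of the PySpark warehouse workloads this role describes, here is a minimal sketch; the storage paths and column names are hypothetical:

```python
# A minimal PySpark transformation workload: read raw data, dedupe,
# derive columns, aggregate, and write a curated table. Paths are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dw-load").getOrCreate()

# Read raw events from cloud storage (e.g., an ADLS-backed path).
raw = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/events/")

# Typical warehouse-style transformation: dedupe, derive, aggregate.
daily = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "customer_id")
       .agg(F.count("*").alias("events"), F.sum("amount").alias("total_amount"))
)

# Write the curated table back for downstream warehouse workloads.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "abfss://curated@account.dfs.core.windows.net/daily_customer_activity/"
)
```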

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As an Associate Managing Consultant in Strategy & Transformation at Mastercard's Performance Analytics division, you will be part of the Advisors & Consulting Services group, which specializes in translating data into actionable insights. Your role will involve leveraging both Mastercard and customer data to design, implement, and scale analytical solutions for clients. By utilizing qualitative and quantitative analytical techniques and enterprise applications, you will synthesize analyses into clear recommendations and impactful narratives.

In this position, you will manage deliverable development and workstreams on projects spanning various industries and problem statements. You will contribute to developing analytics strategies for large clients, leveraging data and technology solutions to unlock client value. Building and maintaining trusted relationships with client managers will be crucial, as you act as a reliable partner in creating predictive models and reviewing analytics end-products for accuracy, quality, and timeliness.

Collaboration and teamwork are central to this role: you will develop sound business recommendations, deliver effective client presentations, and lead team and external meetings. Your responsibilities will also include contributing to the firm's intellectual capital, mentoring junior consultants, and fostering effective working relationships with local and global teams.

To be successful in this role, you should possess an undergraduate degree with experience in data and analytics, business intelligence, and descriptive, predictive, or prescriptive analytics. You should be adept at analyzing large datasets, synthesizing key findings, and providing recommendations through descriptive analytics and business intelligence. Proficiency in data analytics software such as Python, R, SQL, and SAS, as well as advanced skills in Word, Excel, and PowerPoint, is essential. Effective communication in English and the local office language, eligibility to work in the country of application, and a proactive attitude towards learning and growth are also required.

Preferred qualifications include additional experience working with the Hadoop framework, data visualization tools like Tableau and Power BI, and coaching junior delivery consultants. While an MBA or master's degree with a relevant specialization is not mandatory, relevant industry expertise would be advantageous.

At Mastercard, we prioritize information security. Every individual associated with the organization is expected to abide by security policies, maintain the confidentiality and integrity of accessed information, report any security violations or breaches, and complete required security trainings to ensure the protection of Mastercard's assets, information, and networks.
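As a small illustration of the descriptive analytics work this listing describes, here is a hypothetical pandas sketch; the input file and columns are invented:

```python
# Descriptive analytics sketch: profile a transaction dataset and summarize
# key metrics for synthesis into findings. File/columns are hypothetical.
import pandas as pd

txns = pd.read_csv("transactions.csv", parse_dates=["txn_date"])

# Per-category descriptive statistics, the raw material for a recommendation.
summary = txns.groupby("merchant_category").agg(
    txn_count=("txn_id", "count"),
    total_spend=("amount", "sum"),
    avg_ticket=("amount", "mean"),
).sort_values("total_spend", ascending=False)

print(summary.head(10))  # top categories by spend, a typical insight input
```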

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Gujarat

On-site

As a Data Scientist at Micron Technology in Sanand, Gujarat, you will have the opportunity to play a pivotal role in transforming how the world uses information to enrich life for all. Micron Technology is a global leader in innovating memory and storage solutions, driving the acceleration of information into intelligence and inspiring advancements in learning, communication, and progress.

Your responsibilities will involve a broad range of tasks, including but not limited to:
- Developing a strong career path as a Data Scientist in highly automated industrial manufacturing, focusing on analysis and machine learning of terabytes and petabytes of diverse datasets.
- Extracting data from various databases using SQL and other query languages, and applying data cleansing, outlier identification, and missing-data techniques.
- Applying the latest mathematical and statistical techniques to analyze data and identify patterns.
- Building web applications as part of your job scope.
- Utilizing cloud-based analytics and machine learning modeling.
- Building APIs for application integration.
- Engaging in statistical modeling, feature extraction and analysis, feature engineering, and supervised/unsupervised/semi-supervised learning.
- Demonstrating proficiency in data analysis and validation, as well as strong software development skills.

In addition, you should possess above-average skills in:
- Programming fluency in Python.
- Knowledge of statistics, machine learning, and other advanced analytical methods.
- Familiarity with JavaScript, AngularJS 2.0, and Tableau, with a background in OOPS considered an advantage.
- Understanding of PySpark and/or libraries for distributed and parallel processing.
- Experience with TensorFlow and/or other statistical software with scripting capabilities.
- Knowledge of time series data, images, semi-supervised learning, and data with frequently changing distributions (a plus).
- Understanding of Manufacturing Execution Systems (MES) (beneficial).

You should be able to work in a dynamic, fast-paced environment, be self-motivated, adaptable to new technologies, and possess a passion for data and information with excellent analytical, problem-solving, and organizational skills. Effective communication with distributed teams (written, verbal, and presentation) and the ability to work collaboratively towards common objectives are key attributes for this role.

To be eligible for this position, you should hold a Bachelor's or Master's degree in Computer Science or Electrical/Electronic Engineering, with a CGPA of 7.0 and above.

Join Micron Technology, Inc., where our relentless focus on customers, technology leadership, and operational excellence drives the creation of high-performance memory and storage products that power the data economy. Visit micron.com/careers to learn more about our innovative solutions and opportunities for growth. For any assistance with the application process or to request reasonable accommodations, please reach out to hrsupport_india@micron.com.

Micron Technology strictly prohibits the use of child labor and complies with all applicable laws, rules, regulations, and international labor standards. Candidates are encouraged to use AI tools to enhance their application materials, ensuring accuracy and truthfulness in representing their skills and experiences; fabrication or misrepresentation will lead to immediate disqualification.

As a Data Scientist at Micron Technology, you will be part of a transformative journey that shapes the future of information utilization and enriches lives across the globe.
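To make the cleansing and outlier-identification tasks above concrete, here is a brief hypothetical sketch using a simple IQR rule; the dataset and columns are invented:

```python
# Data cleansing and outlier identification via the interquartile-range rule.
# The parquet file and column names are hypothetical.
import pandas as pd

df = pd.read_parquet("sensor_readings.parquet")

# Missing-data handling: drop rows missing the key, impute a numeric field.
df = df.dropna(subset=["tool_id"])
df["temperature"] = df["temperature"].fillna(df["temperature"].median())

# Outlier identification: flag readings outside 1.5 * IQR of the quartiles.
q1, q3 = df["temperature"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["temperature"] < q1 - 1.5 * iqr) | (df["temperature"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} outlier readings flagged for review")
```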

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

The role of S&C GN AI - Insurance AI Generalist Consultant at Accenture Global Network involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions. As a Team Lead/Consultant at the Bengaluru BDC7C location, you will provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.

In this position, you will be part of a unified powerhouse that combines the capabilities of Strategy & Consulting with Data and Artificial Intelligence. You will architect, design, build, deploy, deliver, and monitor advanced analytics models, including Generative AI, for various client problems. Additionally, you will develop functional aspects of Generative AI pipelines and interface with clients to understand engineering and business problems.

The ideal candidate has 5+ years of experience in data-driven techniques, a Bachelor's/Master's degree in Mathematics, Statistics, Economics, Computer Science, or a related field, and a solid foundation in statistical modeling and machine learning algorithms. Proficiency in programming languages such as Python, PySpark, SQL, and Scala is required, as well as experience implementing AI solutions for the insurance industry. Strong communication, collaboration, and presentation skills are essential to effectively convey complex data insights and recommendations to clients and stakeholders. Hands-on experience with Azure, AWS, or Databricks tools is a plus, and familiarity with GenAI, LLMs, RAG architecture, and LangChain frameworks is beneficial.

This role offers the opportunity to work on innovative projects, with career growth and leadership exposure within Accenture, a global community that continually pushes the boundaries of business capabilities. If you are a motivated individual with strong analytical, problem-solving, and communication skills, and the ability to thrive in a fast-paced, dynamic environment, this role provides an exciting opportunity to contribute to Accenture's future growth and be part of a vibrant global community.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Kolkata, West Bengal

On-site

We are looking for an experienced professional with strong mathematical and statistical expertise, as well as a natural curiosity and creative mindset to uncover hidden opportunities within data. Your primary goal will be to realize the full potential of the data by asking questions, connecting dots, and thinking innovatively.

Responsibilities:
- Design and implement scalable and efficient data storage solutions using Snowflake.
- Write, optimize, and troubleshoot SQL queries within the Snowflake environment.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Identify gaps in existing pipelines and resolve issues.
- Develop data models to meet reporting needs by working closely with the business.
- Assist team members in resolving technical challenges.
- Engage in technical discussions with client architects and team members.
- Orchestrate data pipelines via the Airflow scheduler.
- Integrate Snowflake with various data sources and third-party tools.

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science or equivalent experience.
- Minimum 7 years of experience in Data & Analytics with strong communication and presentation skills.
- At least 6 years of experience in Snowflake implementations and large-scale, end-to-end data warehouse implementation.
- Databricks certified architect.
- Proficiency in SQL and scripting languages (e.g., Python, Spark, PySpark) for data manipulation and automation.
- Solid understanding of cloud platforms (AWS, Azure, GCP) and their integration with data tools.
- Familiarity with data governance and data management practices.
- Exposure to data sharing, Unity Catalog, DBT, replication tools, and performance tuning will be advantageous.

About Tredence: Tredence focuses on delivering powerful insights into profitable actions by combining business analytics, data science, and software engineering. We work with leading companies worldwide, providing prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia. Tredence is an equal opportunity employer that values diversity and is dedicated to fostering an inclusive environment for all employees. To learn more about us, visit https://www.tredence.com/
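For the Airflow orchestration item above, here is a minimal hypothetical DAG sketch; the DAG id, schedule, and task logic are invented placeholders, not the client's actual pipeline:

```python
# A minimal Airflow DAG orchestrating an extract-then-load pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def load_to_snowflake():
    # Placeholder: in practice this would run COPY INTO / MERGE statements
    # against Snowflake via a connector or provider hook.
    print("loading staged files into Snowflake")

with DAG(
    dag_id="snowflake_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=lambda: print("extract"))
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    extract >> load  # extract runs before load
```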

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Principal Engineer / Architect at our organization, you will combine deep technical expertise with strategic thinking to design and implement scalable, secure, and modern digital systems. This senior technical leadership role requires hands-on architecture experience, a strong command of cloud-native development, and a successful track record of leading teams through complex solution delivery. You will collaborate with cross-functional teams including engineering, product, DevOps, and business stakeholders to define technical roadmaps, ensure alignment with enterprise architecture principles, and guide platform evolution.

Key Responsibilities:

Architecture & Design:
- Lead the design of modular, microservices-based, and secure architecture for scalable digital platforms.
- Define and enforce cloud-native architectural best practices using Azure, AWS, or GCP.
- Prepare high-level design artefacts, interface contracts, data flow diagrams, and service blueprints.

Cloud Engineering & DevOps:
- Drive infrastructure design and automation using Terraform or CloudFormation.
- Support Kubernetes-based container orchestration and efficient CI/CD pipelines.
- Optimize for performance, availability, cost, and security using modern observability stacks and metrics.

Data & API Strategy:
- Architect systems that handle structured and unstructured data with performance and reliability.
- Design APIs with reusability, governance, and lifecycle management in mind.
- Guide caching, query optimization, and stream/batch data pipelines across the stack.

Technical Leadership:
- Act as a hands-on mentor to engineering teams, leading by example and resolving architectural blockers.
- Review technical designs, codebases, and DevOps pipelines to uphold engineering excellence.
- Translate strategic business goals into scalable technology solutions with pragmatic trade-offs.

Key Requirements:

Must Have:
- 5+ years in software architecture or principal engineering roles with real-world system ownership.
- Strong experience in cloud-native architecture with AWS, Azure, or GCP (certification preferred).
- Programming experience with Java, Python, or Node.js, and frameworks like Flask, FastAPI, Celery.
- Proficiency with PostgreSQL, MongoDB, Redis, and scalable data design patterns.
- Expertise in Kubernetes, containerization, and GitOps-style CI/CD workflows.
- Strong foundation in Infrastructure as Code (Terraform, CloudFormation).
- Excellent verbal and written communication; proven ability to work across technical and business stakeholders.

Nice to Have:
- Experience with MLOps pipelines, observability stacks (ELK, Prometheus/Grafana), and tools like MLflow and Langfuse.
- Familiarity with Generative AI frameworks (LangChain, LlamaIndex) and vector databases (Milvus, ChromaDB).
- Understanding of event-driven, serverless, and agentic AI architecture models.
- Python libraries such as pandas, NumPy, and PySpark, and support for multi-component pipelines (MCP).

Preferred:
- Prior experience leading technical teams in regulated domains (finance, healthcare, govtech).
- Cloud security, cost optimization, and compliance-oriented architectural mindset.

What You'll Gain:
- Work on mission-critical projects using the latest cloud, data, and AI technologies.
- Collaborate with a world-class, cross-disciplinary team.
- Opportunities to contribute to open architecture, reusable frameworks, and technical IP.
- Career advancement via leadership, innovation labs, and enterprise architecture pathways.
- Competitive compensation, flexibility, and a culture that values innovation and impact.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

We are looking for Data Engineers with expertise in SAS, Python, and PySpark to support code migration and data migration projects from legacy environments to cloud platforms. This role entails hands-on experience leveraging EXL's Generative AI solution, Code Harbor, to streamline migration processes, automate code refactoring, and optimize data transformation. The ideal candidate will have 5+ years of relevant experience in IT services, with strong knowledge of modernizing data pipelines, transforming legacy codebases, and optimizing big data processing for cloud infrastructure.

Key Responsibilities:
- Migrate code from SAS/legacy systems to Python/cloud-native frameworks.
- Develop and optimize enhanced data pipelines using PySpark for efficient cloud-based processing.
- Refactor and modernize legacy SAS-based workflows, ensuring seamless AI-assisted translation for cloud execution.
- Ensure data integrity, security, and performance throughout the migration lifecycle.
- Troubleshoot AI-generated outputs to refine accuracy and resolve migration-related challenges.

Required Skills & Qualifications:
- Strong expertise in SAS, Python, and PySpark, with experience in code migration and data transformation.
- Strong problem-solving skills and adaptability in fast-paced, AI-driven migration projects.
- Excellent communication and collaboration skills to work with cross-functional teams.

Education Background: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Tier I/II candidates preferred.
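To illustrate the SAS-to-PySpark migration this role centers on, here is a hedged sketch; the SAS snippet in the comment and all dataset, path, and column names are hypothetical:

```python
# SAS-to-PySpark migration illustration. Legacy SAS (hypothetical):
#
#   data work.high_value;
#     set raw.orders;
#     where amount > 1000;
#     revenue_band = "HIGH";
#   run;
#
# One PySpark equivalent of that DATA step:
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sas-migration").getOrCreate()

orders = spark.read.parquet("s3://datalake/raw/orders/")
high_value = (
    orders.filter(F.col("amount") > 1000)          # WHERE clause
          .withColumn("revenue_band", F.lit("HIGH"))  # derived variable
)
high_value.write.mode("overwrite").parquet("s3://datalake/curated/high_value/")
```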

Posted 2 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

We are looking for Data Engineers with expertise in SAS, Python, and PySpark to support code migration and data migration projects from legacy environments to cloud platforms. This role entails hands-on experience leveraging EXL's Generative AI solution, Code Harbor, to streamline migration processes, automate code refactoring, and optimize data transformation. The ideal candidate will have 2-3 years of relevant experience in IT services, with strong knowledge of modernizing data pipelines, transforming legacy codebases, and optimizing big data processing for cloud infrastructure.

Key Responsibilities:
- Migrate code from SAS/legacy systems to Python/cloud-native frameworks.
- Develop and optimize enhanced data pipelines using PySpark for efficient cloud-based processing.
- Refactor and modernize legacy SAS-based workflows, ensuring seamless AI-assisted translation for cloud execution.
- Ensure data integrity, security, and performance throughout the migration lifecycle.
- Troubleshoot AI-generated outputs to refine accuracy and resolve migration-related challenges.

Required Skills & Qualifications:
- Strong expertise in SAS, Python, and PySpark, with experience in code migration and data transformation.
- Strong problem-solving skills and adaptability in fast-paced, AI-driven migration projects.
- Excellent communication and collaboration skills to work with cross-functional teams.

Education Background: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Tier I/II candidates preferred.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Tamil Nadu

On-site

Wipro Limited is a leading technology services and consulting company committed to developing innovative solutions that cater to clients' most intricate digital transformation requirements. With a comprehensive range of capabilities in consulting, design, engineering, and operations, we empower clients to achieve their most ambitious goals and establish sustainable, future-ready businesses. Our global presence spans 65 countries, with over 230,000 employees and business partners dedicated to supporting our customers, colleagues, and communities in thriving amidst an ever-evolving world.

We are currently seeking a Sr ETL Test Engineer with the following qualifications:
- Primary Skill: ETL Testing
- Secondary Skill: Azure

The ideal candidate should possess:
- At least 5 years of experience in data warehouse testing and a minimum of 2 years of Azure Cloud experience.
- Profound understanding of data marts and data warehouse concepts.
- Proficiency in SQL, with the ability to develop source-to-target comparison test cases in SQL.
- Capability to create test plans, test cases, traceability matrices, and closure reports.
- Proficiency in PySpark, Python, Git, Jira, and JTM.

Location: Pune, Chennai, Coimbatore, Bangalore
Band: B2 and B3
Mandatory Skills: ETL Testing
Experience Required: 3-5 Years

Join us in reinventing the future at Wipro. We are transforming into a modern organization, striving to be an end-to-end digital transformation partner with the most audacious aspirations. We are looking for individuals who are inspired by reinvention and eager to evolve themselves, their careers, and their skills. At Wipro, we embrace change as it is inherent in our DNA. We invite you to be part of a purpose-driven business that encourages you to craft your own reinvention. Realize your aspirations with us at Wipro. We welcome applications from individuals with disabilities.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

The Specialized Analytics Manager role provides full leadership and supervisory responsibility within a team. You will offer operational and service leadership and guidance to the team, applying your in-depth disciplinary knowledge to provide value-added perspectives and advisory services. Your responsibilities may include contributing to the development of new techniques, models, and plans within your area of expertise. Excellent communication and diplomacy skills are essential for this role.

You will be responsible for the volume, quality, and timeliness of end results, and share responsibility for planning and budgets. Your work will have a significant impact on the overall performance and effectiveness of the sub-function/job family. As a manager, you will oversee the motivation and development of the team through professional leadership, including performance evaluation, compensation, hiring, disciplinary actions, terminations, and daily task direction.

In this role, you will work with large and complex data sets, both internal and external, to evaluate, recommend, and support the implementation of business strategies. This will involve identifying and compiling data sets using tools such as SAS, SQL, and PySpark to help predict, improve, and measure the success of key business outcomes. You will be responsible for documenting data requirements, data collection, processing, cleaning, and exploratory data analysis, which may involve utilizing statistical models, algorithms, and data visualization techniques. Individuals in this role are often referred to as Data Scientists and will specialize in digital and marketing analytics.

Additionally, you will need to appropriately assess risk when making business decisions, with a focus on safeguarding Citigroup, its clients, and assets. This includes driving compliance with laws, rules, and regulations, adhering to policies, applying ethical judgment, and effectively supervising the activity of others to maintain high standards of conduct.

Qualifications for this role include experience in a people-manager position, a strong understanding of Adobe Analytics, proficiency in SAS and Python, excellent communication skills for coordination with senior business leaders, a good grasp of financials and P&L metrics, a background in financial services with an understanding of the credit card business, and preferably exposure to digital business and knowledge of digital performance KPIs. This job description offers a comprehensive overview of the responsibilities and qualifications required for the role; other job-related duties may be assigned as necessary.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ideal candidate for this role, you will be responsible for designing and architecting scalable Big Data solutions within the Hadoop ecosystem. Your key duties will include leading architecture-level discussions for data platforms and analytics systems, constructing and optimizing data pipelines utilizing PySpark and other distributed computing tools, translating business requirements into scalable data models and integration workflows, and ensuring the high performance and availability of enterprise-grade data processing systems. In addition, you will play a crucial role in mentoring development teams and offering guidance on best practices and performance tuning.

To excel in this position, you must possess architect-level experience with the Big Data ecosystem and enterprise data solutions. Proficiency in Hadoop, PySpark, and distributed data processing frameworks is essential, along with strong hands-on experience in SQL and data warehousing concepts. A deep understanding of data lake architecture, data ingestion, ETL, and orchestration tools is also required. Your experience in performance optimization and handling large-scale data sets, coupled with excellent problem-solving, design, and analytical skills, will be highly valued.

While not mandatory, exposure to cloud platforms like AWS, Azure, or GCP for data solutions would be a beneficial asset. Additionally, familiarity with data governance, data security, and metadata management is considered a good-to-have skill set for this role.

Joining our team offers you the opportunity to work with cutting-edge Big Data technologies, gain leadership exposure, and participate directly in architectural decisions. This is a stable, full-time position within a top-tier tech team, offering a conducive work-life balance with a standard 5-day working schedule. If you are passionate about Big Data technologies and eager to contribute to innovative solutions, we welcome your application for this exciting opportunity.
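As a short illustration of the PySpark performance tuning this role involves, here is a hypothetical sketch: prune early, broadcast small dimensions, and control output partitioning. Paths and columns are invented:

```python
# PySpark tuning sketch: column pruning, early filtering, broadcast join,
# and coalescing output files. All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = (
    spark.read.parquet("hdfs:///warehouse/fact_sales/")
         .select("sale_id", "store_id", "amount")  # column pruning
         .filter(F.col("amount") > 0)              # early row filtering
)
stores = spark.read.parquet("hdfs:///warehouse/dim_store/")

# Broadcast the small dimension table to avoid a shuffle-heavy join.
joined = facts.join(broadcast(stores), "store_id")

# Coalesce before writing to avoid thousands of tiny output files.
joined.coalesce(64).write.mode("overwrite").parquet(
    "hdfs:///warehouse/sales_enriched/"
)
```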

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have 5+ years of experience in data analysis, engineering, and science, with proficiency in Azure Data Factory, Azure Databricks, Python, PySpark, SQL, and PL/SQL or SAS. Your responsibilities will involve designing, developing, and maintaining ETL pipelines using Azure Databricks, Azure Data Factory, and other relevant technologies. You will be expected to manage and optimize data storage solutions using Azure Data Lake Storage (ADLS) and to develop and deploy data processing workflows using PySpark and Python.

Collaboration with data scientists, analysts, and stakeholders to understand data requirements and ensure data quality is essential. Implementing data integration solutions, ensuring seamless data flow across systems, and utilizing GitHub for version control and collaboration on the codebase are also part of the role. Monitoring and troubleshooting data pipelines to guarantee data accuracy and availability is crucial, and it is imperative to stay updated with the latest industry trends and best practices in data engineering.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

The primary focus of this role will be to perform development work within the Azure Data Lake environment and other related ETL technologies. You will be responsible for ensuring on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. In addition, this role will involve L3 responsibilities for ETL processes.

Your responsibilities will include delivering key Azure Data Lake projects within the specified time and budget. You will contribute to solution design and build to ensure scalability, performance, and reuse of data and other components. Strong problem-solving abilities are required, focusing on managing business outcomes through collaboration with various internal and external stakeholders. You should be enthusiastic, willing to learn, and continuously developing skills and techniques, embracing change and seeking continuous improvement. Effective communication, both written and verbal, with good presentation skills in the English language is necessary, as is being customer-focused and a team player.

Qualifications:
- Bachelor's degree in Computer Science, MIS, Business Management, or a related field
- Minimum 5 years of experience in Information Technology
- Minimum 4 years of experience in Azure Data Lake

Technical Skills:
- Proven experience in development activities in Data, BI, or Analytics projects
- Experience in solutions delivery with knowledge of the system development lifecycle, integration, and sustainability
- Strong knowledge of PySpark and SQL
- Good understanding of Azure Data Factory or Databricks
- Desirable: knowledge of Presto/Denodo
- Desirable: knowledge of FMCG business processes

Non-Technical Skills:
- Excellent remote collaboration skills
- Experience working in a matrix organization with diverse priorities
- Exceptional written and verbal communication, collaboration, and listening skills
- Ability to work with agile delivery methodologies
- Ability to ideate requirements and design iteratively with business partners without formal requirements documentation

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

A Career at HARMAN - Harman Tech Solutions (HTS): Join our global, multi-disciplinary team at HARMAN HTS, where we harness the power of technology to create innovative solutions and shape the future. We combine physical and digital elements to tackle challenges, making technology a dynamic force that serves humanity's needs. Empower our company to drive digital business models, explore new markets, and enhance customer experiences.

As a QA professional with 3-6 years of experience, particularly in Big Data testing within Data Lake environments on Azure's cloud platform, you will play a crucial role in ensuring data integrity and quality. You will develop and execute test scripts to validate data pipelines, transformations, and integrations. You will collaborate with development teams to update test cases and maintain data integrity, design and run tests using Azure services like Data Lake, Synapse, and Data Factory while adhering to industry standards, enhance automated tests for new features, and participate in data reconciliation and Data Quality frameworks.

You will need proficiency in Azure Data Factory, Azure Synapse Analytics, Databricks, SQL, and PySpark for data processing and testing. Experience in functional and system integration testing in big data environments is essential, and knowledge of Agile methodologies, Jira, and test case management tools is required. Your ability to design and execute test cases in a behavior-driven development environment will be key. The role may require up to 25% domestic and international travel, and successful completion of a background investigation is necessary for employment.

At HARMAN, we offer employee discounts on HARMAN/Samsung products, professional development through HARMAN University, flexible work schedules, an inclusive work environment, tuition reimbursement, and employee recognition programs. We are committed to creating a supportive culture where every employee is welcomed, valued, and empowered to share their ideas and unique perspectives.

Join us at HARMAN, where innovation fuels next-level technology across automotive, lifestyle, and digital transformation solutions. Be part of a team that turns ordinary moments into extraordinary experiences and addresses the world's evolving needs. If you are ready to make a lasting impact through innovation, we invite you to join our talent community today.
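For the data reconciliation testing mentioned above, here is a hedged PySpark sketch comparing a source extract against a loaded lake table; all paths, table names, and keys are hypothetical:

```python
# Big-data reconciliation test: source extract vs. loaded lake table.
# Paths, table names, and the business key are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("recon-test").getOrCreate()

source = spark.read.parquet("/mnt/landing/customers/")
target = spark.table("lake.customers")

# Count reconciliation.
src_n, tgt_n = source.count(), target.count()
assert src_n == tgt_n, f"count mismatch: source={src_n}, target={tgt_n}"

# Content reconciliation on the business key: both set differences must be empty.
only_in_source = source.select("customer_id").exceptAll(target.select("customer_id"))
only_in_target = target.select("customer_id").exceptAll(source.select("customer_id"))
assert only_in_source.count() == 0 and only_in_target.count() == 0, "key drift detected"
```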

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As a Data Architect with over 8 years of experience, you will be responsible for designing and implementing scalable, high-performance data solutions. Your expertise in Databricks, Azure, AWS, and modern data technologies will be key to developing data lakes, data warehouses, and real-time streaming architectures. You will optimize data pipelines, Delta Lake architectures, and advanced analytics solutions using Databricks.

Additionally, you will develop and manage cloud-native data platforms on Azure and AWS, design ETL/ELT pipelines, and work with big data processing tools like Apache Spark, PySpark, Scala, Hadoop, and Kafka. Your role will also include implementing data governance and security measures, performance optimization, and collaborating with engineering, analytics, and business teams to align data strategies with business goals.

To excel in this position, you must have hands-on experience with Databricks, strong expertise in Azure and AWS data services, proficiency in SQL, Python, and Scala, experience with NoSQL databases and real-time data streaming, and knowledge of data governance best practices and CI/CD for data pipelines. Overall, the role requires a combination of technical skills, problem-solving abilities, and effective communication to drive successful data solutions within the organization.
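As an illustration of the Delta Lake pipeline optimization work this listing describes, here is a minimal hypothetical upsert; the paths and merge key are invented, and the snippet assumes the delta-spark package is available on the cluster:

```python
# Incremental Delta Lake upsert via MERGE: update matched keys, insert new
# ones. Paths and the customer_id key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-merge").getOrCreate()

updates = spark.read.parquet("/mnt/raw/customer_updates/")
target = DeltaTable.forPath(spark, "/mnt/delta/customers")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # refresh changed rows
    .whenNotMatchedInsertAll()   # add brand-new rows
    .execute()
)
```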

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a PL/SQL Developer with 5 to 7 years of experience, you will be based in Pune (hybrid), with an immediate to 15-day notice period. You must possess expertise in languages such as SQL, T-SQL, PL/SQL, and Python libraries like PySpark, Pandas, NumPy, Matplotlib, and Seaborn, along with databases like SQL Server and Synapse.

Your key responsibilities will include designing and maintaining efficient data pipelines and ETL processes using SQL and Python, writing optimized queries for data manipulation, using Python libraries for data processing and visualization, performing end-of-day (EOD) data aggregation and reporting, working on Azure Synapse Analytics for scalable data transformations, monitoring and managing database performance, collaborating with cross-functional teams, ensuring secure data handling and compliance with organizational policies, and debugging Unix-based scripts.

To be successful in this role, you should have a Bachelor's/Master's degree in Computer Science, IT, or a related field, along with 5-8 years of hands-on experience in data engineering and analytics. You must have a solid understanding of database architecture, experience in end-of-day reporting setups, and familiarity with cloud-based analytics platforms. This is a full-time, permanent position with a day-shift schedule and in-person work location.
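As a small illustration of the end-of-day aggregation and reporting mentioned above, here is a hypothetical pandas sketch; the connection string, table, and columns are invented:

```python
# End-of-day (EOD) rollup: pull today's trades from SQL Server and report
# per-desk totals. Connection string, table, and columns are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://user:pass@dsn_name")  # placeholder DSN

trades = pd.read_sql("SELECT desk, notional, trade_ts FROM trades", engine)

trades["trade_date"] = pd.to_datetime(trades["trade_ts"]).dt.date
eod = (
    trades.groupby(["trade_date", "desk"])
          .agg(trade_count=("notional", "count"), total_notional=("notional", "sum"))
          .reset_index()
)
eod.to_csv("eod_report.csv", index=False)  # or write back to a reporting table
```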

Posted 2 weeks ago

Apply

5.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Technical Architect at Crisil, a data-centric organization, you will be responsible for designing and building scalable, secure, and high-performance solutions capable of handling large data sets. You will collaborate with cross-functional teams to identify and prioritize technical requirements and develop solutions that meet those needs.

Your key responsibilities will include designing and building solutions that can handle large data sets, working closely with cross-functional teams to prioritize technical requirements, reviewing and fine-tuning existing applications for optimal performance, scalability, and security, collaborating with the central architecture team to establish best coding practices, developing and maintaining technical roadmaps aligned with business objectives, evaluating and recommending new technologies for system improvement, leading large-scale projects from design to implementation, mentoring junior staff, and staying updated on emerging trends and technologies.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, along with 10-15 years of experience in software development, including at least 5 years in technical architecture or a related field. You should possess strong technical skills, including an understanding of software architecture principles, proficiency in Java programming and related technologies, knowledge of other programming languages, experience with cloud-based technologies, and familiarity with containerization using Docker and Kubernetes.

In addition to technical skills, you should demonstrate leadership qualities such as experience in leading technical teams and projects, excellent communication and collaboration skills, and the ability to influence and negotiate with stakeholders. Soft skills like problem-solving, attention to detail, adaptability, and effective written and verbal communication are also essential for this role.

Nice-to-have qualifications include a Master's degree in a related field, knowledge of data analytics and visualization tools, experience with machine learning or artificial intelligence, certification in Java development or technical architecture (such as TOGAF), and familiarity with agile development methodologies.

In return, Crisil offers a competitive salary and benefits package, opportunities for professional growth, a collaborative work environment, flexible working hours, access to cutting-edge technologies, and recognition for outstanding performance.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

The non-cloud QA Test Engineer position at our client, a leading technology services and consulting company, offers an exciting opportunity to work on building innovative solutions that cater to clients' complex digital transformation needs. With a holistic portfolio of consulting, design, engineering, and operations capabilities, we empower clients to achieve their boldest ambitions and develop future-ready, sustainable businesses. Our global presence, with over 230,000 employees and business partners across 65 countries, ensures that we deliver on our commitment to helping clients, colleagues, and communities thrive in a dynamic world.

As a non-cloud QA Test Engineer, you will be responsible for ensuring the quality and reliability of our solutions. The key skills required for this role include proficiency in SQL, ETL testing, and proven experience developing solutions in at least one programming language. Data testing tasks such as ETL testing, data validation, transformation checks, and SQL-based testing will be an essential part of your responsibilities. Hands-on experience in Java or Python is crucial for this role.

While the above skills are a must-have, backend automation experience is advantageous, along with scripting knowledge in Python, Java, Scala, or PySpark. Familiarity with Databricks and Azure will be considered a plus.

This position is based in Pune and offers a hybrid work mode under a contract employment type. The ideal candidate should have a minimum of 6 years of experience in the field. The notice period for this role ranges from immediate to 15 days. If you are interested and meet the requirements outlined above, please share your resume with barkavi@people-prime.com. Join us in our mission to drive digital transformation and innovation for our clients while creating a sustainable future for businesses worldwide.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a highly skilled and motivated Python, AWS, and Big Data Engineer to join our data engineering team. The ideal candidate should have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. Your responsibilities will include designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.

Virtusa is a company that values teamwork, quality of life, and professional and personal development. We are proud to have a team of 27,000 people globally who care about your growth and seek to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. At Virtusa, we believe in the potential of great minds coming together; we emphasize collaboration and a team environment, providing a dynamic place for talented individuals to nurture new ideas and strive for excellence.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Are you passionate about developing mission-critical, high-quality software solutions using cutting-edge technology in a dynamic environment? Compliance Engineering, a global team of over 300 engineers and scientists, is dedicated to working on the most complex, mission-critical problems. The team builds and operates a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm. Leveraging the latest technology and vast amounts of structured and unstructured data, we use modern frameworks to build responsive and intuitive front-end and Big Data applications.

As the firm invests significantly in uplifting and rebuilding the Compliance application portfolio, Compliance Engineering seeks to fill several Systems Engineer roles. As a member of our team, you will partner globally with users, development teams, and engineering colleagues across multiple divisions to facilitate the onboarding of new business initiatives and to test and validate Compliance Surveillance coverage. You will have the opportunity to learn from experts, train and mentor team members, leverage various technologies including Java, Python, PySpark, and other Big Data technologies, innovate and incubate new ideas, and be involved in the full software development life cycle.

A successful candidate will possess a Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study, expertise in Java development, debugging, and problem-solving, and experience in delivery or project management. The ability to clearly express ideas and arguments in meetings and on paper is essential. Experience in relational databases, Hadoop and Big Data technologies, knowledge of the financial industry (particularly the Capital Markets domain), and exposure to compliance or risk functions are desired and can set you apart from other candidates.

Goldman Sachs, a leading global investment banking, securities, and investment management firm founded in 1869 and headquartered in New York, is committed to fostering diversity and inclusion in the workplace and beyond. The firm provides numerous opportunities for professional and personal growth, from training and development to firmwide networks, benefits, wellness, personal finance offerings, and mindfulness programs. Learn more about the culture, benefits, and people at GS.com/careers. Goldman Sachs is an equal employment/affirmative action employer dedicated to finding reasonable accommodations for candidates with special needs or disabilities during the recruiting process.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As an AWS Data Engineer with over 5 years of experience, you will work in Chennai (work from office) and will be expected to attend a face-to-face interview on 26 July (Saturday) during IST hours. Domain experience in Life Sciences / Pharma is preferred but not mandatory. Key skill sets include AWS, Python, Databricks, PySpark, and SQL.

Your primary responsibilities will involve designing, building, and maintaining scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats. You will also be responsible for integrating data from multiple sources, ensuring accurate transformation and storage in optimal formats such as Delta Lake, Redshift, and S3. Additionally, you will optimize data processing and storage systems for cost efficiency and high performance while managing compute resources and cluster configurations.

Automation of data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks will be a crucial part of your role, as will implementing data quality checks, validation rules, and transformation logic to guarantee the accuracy, consistency, and reliability of data.

In terms of cloud platform management, you will manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations. You will lead migrations from legacy data systems to modern cloud-based platforms and implement cost optimization strategies. Ensuring data security by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards will be vital. Lastly, collaborating with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks will be key to your success in this role.
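To make the data quality checks and validation rules above concrete, here is a hypothetical PySpark sketch; the path, columns, and thresholds are invented:

```python
# Simple data quality gate: run named validation rules against a curated
# table and fail loudly if any rule breaks. Path/columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.format("delta").load("s3://lake/curated/patients/")

checks = {
    "non_empty": df.count() > 0,
    "no_null_keys": df.filter(F.col("patient_id").isNull()).count() == 0,
    "valid_ages": df.filter((F.col("age") < 0) | (F.col("age") > 120)).count() == 0,
    "unique_keys": df.count() == df.select("patient_id").distinct().count(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```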

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

We are looking for an experienced Databricks on AWS and PySpark Engineer to join our team.

Your responsibilities will include:
- Designing, building, and maintaining large-scale data pipelines and architectures using Databricks on AWS and PySpark
- Developing and optimizing data processing workflows using PySpark and Databricks
- Collaborating with data scientists and analysts to design and implement data models and architectures
- Ensuring data quality, security, and compliance with industry standards and regulations
- Troubleshooting and resolving data pipeline issues and optimizing performance
- Staying up-to-date with industry trends and emerging technologies in data engineering and big data

Requirements:
- 3+ years of experience in data engineering, with a focus on Databricks on AWS and PySpark
- Strong expertise in PySpark and Databricks, including data processing, data modeling, and data warehousing
- Experience with AWS services such as S3, Glue, and IAM
- Strong understanding of data engineering principles, including data pipelines, data governance, and data security
- Experience with data processing workflows and data pipeline management

Soft Skills:
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
- Ability to work in a fast-paced, dynamic environment
- Ability to adapt to changing requirements and priorities

If you are a proactive and skilled professional with a passion for data engineering and a strong background in Databricks on AWS and PySpark, we encourage you to apply for this opportunity.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: Scala, PySpark
Minimum Experience Required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to guarantee the quality of the applications you create, while continuously seeking ways to enhance functionality and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct thorough testing and debugging of applications to ensure optimal performance and reliability.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good to have: Experience with PySpark and Scala.
- Strong understanding of data integration and ETL processes.
- Familiarity with cloud computing concepts and services.
- Experience in application lifecycle management and agile methodologies.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Notice Period: 30 days to immediate
Experience Required: 8+ years in data engineering and software development

Job Description: We are seeking a Lead Data Engineer with strong expertise in Python, PySpark, Airflow (batch jobs), HPCC, and ECL to drive complex data solutions across multi-functional teams. The ideal candidate will have hands-on experience with data modeling, test-driven development, and Agile/Waterfall methodologies. You will lead initiatives, collaborate across teams, and translate business needs into scalable data solutions using best practices in managed services or staff augmentation environments.

Posted 2 weeks ago

Apply