
921 Sqoop Jobs - Page 17

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Airflow with PySpark: emphasizes expertise in designing, developing, and deploying data pipelines using Apache Airflow. The focus is on creating, managing, and monitoring workflows, ensuring data quality, and collaborating with other data teams.
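Since this posting centers on designing and monitoring Airflow workflows, here is a minimal pure-Python sketch of the core idea, tasks wired into a DAG and executed in dependency order, using only the standard library (task names and handlers are hypothetical, not from the posting):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# A tiny DAG shaped like an Airflow pipeline: extract -> transform -> validate -> load.
# Mapping: task -> set of upstream tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"transform", "validate"},
}

def run_pipeline(dag, handlers):
    """Execute tasks in topological (dependency) order, as a scheduler would."""
    results = {}
    for task in TopologicalSorter(dag).static_order():
        results[task] = handlers[task](results)
    return results

handlers = {
    "extract": lambda r: [1, 2, 3],
    "transform": lambda r: [x * 10 for x in r["extract"]],
    "validate": lambda r: all(x > 0 for x in r["transform"]),
    "load": lambda r: sum(r["transform"]),
}

print(run_pipeline(dag, handlers)["load"])  # 60
```

In real Airflow the same shape is expressed as operators chained with `>>` dependencies, with the scheduler handling retries, backfills, and monitoring.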

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include: designing, building, optimizing, and supporting new and existing data models and ETL processes based on our clients' business requirements; building, deploying, and managing data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization; and coordinating data access and security so that data scientists and analysts can easily access data whenever they need to. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: must have 5+ years of experience in Big Data (Hadoop, Spark, Scala, Python, HBase, Hive). Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis; good working experience using Python to develop a custom framework for generating rules (much like a rules engine); experience developing Python code to gather data from HBase and designing solutions implemented with PySpark; use of Apache Spark DataFrames/RDDs to apply business transformations and HiveContext objects to perform read/write operations. Preferred technical and professional experience: understanding of DevOps; experience building scalable end-to-end data ingestion and processing solutions; experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
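The posting mentions a custom Python framework for generating rules, in the style of a rules engine. One common way such a framework is sketched is as named predicates applied to records for data-quality checks (the rule names, fields, and records below are illustrative, not the employer's actual framework):

```python
# Minimal rules-engine sketch: rules are named predicates applied to records,
# returning which rules each record violates (a common data-quality pattern).
rules = {
    "non_null_id": lambda rec: rec.get("id") is not None,
    "positive_amount": lambda rec: rec.get("amount", 0) > 0,
    "known_region": lambda rec: rec.get("region") in {"EMEA", "APAC", "AMER"},
}

def evaluate(records, rules):
    """Return {record index: [failed rule names]} for records breaking any rule."""
    failures = {}
    for i, rec in enumerate(records):
        failed = [name for name, check in rules.items() if not check(rec)]
        if failed:
            failures[i] = failed
    return failures

records = [
    {"id": 1, "amount": 50.0, "region": "EMEA"},
    {"id": None, "amount": -5.0, "region": "EMEA"},
    {"id": 3, "amount": 10.0, "region": "MARS"},
]
print(evaluate(records, rules))
# {1: ['non_null_id', 'positive_amount'], 2: ['known_region']}
```

In a Spark setting the same predicates would typically be expressed as DataFrame filters, with failing rows routed to a quarantine table.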

Posted 1 month ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Seeking a skilled Data Engineer to work on cloud-based data pipelines and analytics platforms. The ideal candidate will have hands-on experience in PySpark and AWS, with proficiency in designing data lakes and working with modern data orchestration tools.

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include: designing, building, optimizing, and supporting new and existing data models and ETL processes based on our clients' business requirements; building, deploying, and managing data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization; and coordinating data access and security so that data scientists and analysts can easily access data whenever they need to. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: must have 5+ years of experience in Big Data (Hadoop, Spark, Scala, Python, HBase, Hive). Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis; good working experience using Python to develop a custom framework for generating rules (much like a rules engine); experience developing Python code to gather data from HBase and designing solutions implemented with PySpark; use of Apache Spark DataFrames/RDDs to apply business transformations and HiveContext objects to perform read/write operations. Preferred technical and professional experience: understanding of DevOps; experience building scalable end-to-end data ingestion and processing solutions; experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.

Posted 1 month ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Mumbai

Work from Office

Sr. Developer with special emphasis and 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components such as AWS Glue and Athena. Should also have good knowledge of data warehouse tools to understand the existing system, experience with data lakes, Teradata, and Snowflake, and should be good at Terraform. Responsibilities and requirements: 8-10 years of experience designing and developing Python and PySpark applications; creating or maintaining data lake solutions using Snowflake, Teradata, and other data warehouse tools; good knowledge of and hands-on experience with AWS Glue, Athena, etc.; sound knowledge of all data lake concepts and the ability to work on data migration projects; providing ongoing support and maintenance for applications, including troubleshooting and resolving issues; expertise in practices like Agile, peer reviews, and CI/CD pipelines.

Posted 1 month ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Mumbai

Work from Office

4+ years of experience as a Data Engineer or in a similar role. Proficiency in Python, PySpark, and advanced SQL. Hands-on experience with big data tools and frameworks (e.g., Spark, Hive). Experience with cloud data platforms such as AWS, Azure, or GCP is a plus. Solid understanding of data modeling, warehousing, and ETL processes. Strong problem-solving and analytical skills. Good communication and teamwork abilities. Responsibilities: design, build, and maintain data pipelines that collect, process, and store data from various sources; integrate data from multiple heterogeneous sources such as databases (SQL/NoSQL), APIs, cloud storage, and flat files; optimize data processing tasks to improve execution efficiency, reduce costs, and minimize processing times, especially when working with large-scale datasets in Spark; design and implement data warehousing solutions that centralize data from multiple sources for analysis.
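Centralizing data from heterogeneous sources into one warehouse-style table for analysis, as this role describes, can be illustrated with the stdlib `sqlite3` module (the sources, schema, and values are invented for the example):

```python
import sqlite3

# Two heterogeneous "sources" (say, an API dump and a flat file) merged into
# one warehouse-style table, then aggregated for analysis.
api_rows = [("alice", 120), ("bob", 80)]
csv_rows = [("bob", 40), ("carol", 200)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", api_rows + csv_rows)

# Aggregate across sources, like a warehouse rollup query.
totals = dict(conn.execute(
    "SELECT customer, SUM(amount) FROM sales GROUP BY customer ORDER BY customer"
))
print(totals)  # {'alice': 120, 'bob': 120, 'carol': 200}
```

At production scale the same pattern runs as a Spark or warehouse job, but the integrate-then-aggregate shape is identical.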

Posted 1 month ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Combine interface design concepts with digital design and establish milestones to encourage cooperation and teamwork. Develop overall concepts for improving the user experience within a business webpage or product, ensuring all interactions are intuitive and convenient for customers. Collaborate with back-end web developers and programmers to improve usability. Conduct thorough testing of user interfaces on multiple platforms to ensure all designs render correctly and systems function properly. Convert jobs from Talend ETL to Python and convert Lead SQLs to Snowflake. Developers should be proficient in Python (especially Pandas, PySpark, or Dask) for ETL scripting, with strong SQL skills to translate complex queries. They need expertise in Snowflake SQL for migrating and optimizing queries, as well as experience with data pipeline orchestration (e.g., Airflow) and cloud integration for automation and data loading. Familiarity with data transformation, error handling, and logging is also essential.
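The error handling and logging called essential above often takes this shape in an ETL script: transform row by row, log and skip bad records rather than crashing the whole load. A minimal pure-Python sketch (the function and field names are hypothetical):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("etl")

def transform_row(row):
    """Cast the amount field; raises ValueError/KeyError on bad input."""
    return {"id": row["id"], "amount": float(row["amount"])}

def run_etl(rows):
    """Transform rows, logging and skipping bad records instead of failing."""
    good, bad = [], 0
    for row in rows:
        try:
            good.append(transform_row(row))
        except (KeyError, ValueError) as exc:
            bad += 1
            log.warning("skipping row %r: %s", row, exc)
    log.info("loaded %d rows, skipped %d", len(good), bad)
    return good

rows = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "oops"}, {"id": 3, "amount": "3"}]
loaded = run_etl(rows)
print(len(loaded))  # 2
```

An orchestration layer such as Airflow would then alert on the skip count rather than letting bad rows silently disappear.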

Posted 1 month ago

Apply

8.0 - 13.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Experience in SQL and understanding of ETL best practices. Should have good hands-on experience in ETL/Big Data development. Extensive hands-on experience in Scala. Should have experience in Spark/YARN, troubleshooting Spark, Linux, and Python, as well as setting up a Hadoop cluster, backup, recovery, and maintenance.

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Seeking a skilled Data Engineer to work on cloud-based data pipelines and analytics platforms. The ideal candidate will have hands-on experience in PySpark and AWS, with proficiency in designing data lakes and working with modern data orchestration tools.

Posted 1 month ago

Apply

5.0 - 7.0 years

13 - 17 Lacs

Hyderabad

Work from Office

Skilled in multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations; able to analyse data for functional business requirements and interface directly with the customer. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform; skilled in multiple GCP services (GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer). You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work. Preferred technical and professional experience: intuitive individual with an ability to manage change and proven time management; proven interpersonal skills while contributing to team effort by accomplishing related results as needed; up-to-date technical knowledge from attending educational workshops and reviewing publications.

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Pune

Work from Office

Developer leads the cloud application development/deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access control, and network security using automation and configuration management tools. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns; strong knowledge of ORM tools like Hibernate or JPA and of Java-based microservices frameworks; hands-on experience with Spring Boot microservices. Primary skills: Core Java, Spring Boot, Java2/EE, microservices; the Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Strong knowledge of microservice logging, monitoring, debugging, and testing; in-depth knowledge of relational databases (e.g., MySQL); experience with container platforms such as Docker and Kubernetes; experience with messaging platforms such as Kafka or IBM MQ; good understanding of test-driven development; familiarity with Ant, Maven, or another build automation framework; good knowledge of basic UNIX commands; experience in concurrent design and multi-threading. Preferred technical and professional experience: none.

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Chennai

Work from Office

As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: Spring Boot, Java2/EE, microservices; the Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Preferred technical and professional experience: none.

Posted 1 month ago

Apply

2.0 - 7.0 years

9 - 13 Lacs

Bengaluru

Work from Office

As an IBM Maximo Consultant, you would be responsible for: being a trusted advisor to the customer on IBM Maximo; having the ability to influence decision makers on technical feasibility; implementing solutions within defined architectural guardrails, principles, and frameworks; working on IBM Maximo configurations, workflows, etc.; providing development support, deployment support, and post-go-live support; IBM Maximo integration across systems within the customer landscape; leading and/or supporting migration of legacy Maximo systems (e.g., 7.6.x) to Maximo Application Suite (MAS) on OpenShift; implementing MAS Mobile solutions tailored to client-specific use cases per defined design specifications; deploying MAS on Red Hat OpenShift Container Platform (ROCP), on-premise or cloud (AWS, Azure, IBM Cloud, etc.); performing installation, configuration, and troubleshooting of MAS components including MAS Core, MAS Manage, MAS Mobile, MAS Monitor, Predict, Health, etc.; configuring and integrating MAS with third-party tools, ERP systems, and enterprise identity providers (LDAP, SSO, etc.); providing post-deployment support and knowledge transfer to client IT and support teams; preparing deployment documentation, configuration guides, and operational runbooks; and staying current with the latest IBM MAS releases. Required education: Bachelor's degree. Preferred education: Bachelor's degree. Required technical and professional expertise: Application experience: 4 years of experience in IBM Maximo EAM, including system administration, customization, and deployment; 2 years of experience with IBM MAS (Maximo Application Suite), covering one or more MAS offerings (MAS Manage, MAS Core, MAS Mobile, MAS Monitor, Predict, Health, etc.); 1+ years of experience migrating deployments from EAM/Maximo to MAS. Technical experience: strong programming skills (Java, Python, object-oriented programming, JavaScript, API design); proficient in SQL and cron tasks.
Understanding of the database side of migration from any source DB to Db2. Cloud deployment experience: strong understanding and experience of cloud deployment architectures; knowledge and hands-on experience of Red Hat OpenShift, Docker, Kubernetes, and Helm charts; working experience of deployments on various cloud platforms (IBM Cloud, AWS, Azure, or GCP). Self-motivated problem solver: demonstrating a natural bias towards self-motivation, curiosity, and initiative, in addition to navigating data and people to find answers and present solutions. Collaboration and communication: strong collaboration and communication skills as you work across the client, partner, and IBM teams. Preferred technical and professional experience: IBM Maximo Certified Deployment Consultant (V7.6 and/or V7.5); working knowledge of hypervisors such as VMware and of virtualization management and administration technologies; MAS Mobile implementation experience.

Posted 1 month ago

Apply

5.0 - 7.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Skilled in multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations; able to analyse data for functional business requirements and interface directly with the customer. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform; skilled in multiple GCP services (GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer). You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work. Preferred technical and professional experience: intuitive individual with an ability to manage change and proven time management; proven interpersonal skills while contributing to team effort by accomplishing related results as needed; up-to-date technical knowledge from attending educational workshops and reviewing publications.

Posted 1 month ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must-have skills: Neo4j, Stardog. Good-to-have skills: Java. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education. Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge in data engineering. - Continuously evaluate and improve data processes to enhance efficiency and effectiveness. Professional & Technical Skills: - Must-have skills: proficiency in Neo4j. - Good-to-have skills: experience with Java. - Strong understanding of data modeling and graph database concepts. - Experience with data integration tools and ETL processes. - Familiarity with data quality frameworks and best practices. - Proficient in programming languages such as Python or Scala for data manipulation.
Additional Information: - The candidate should have a minimum of 5 years of experience in Neo4j. - This position is based at our Bengaluru office. - A 15 years full-time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must-have skills: Apache Spark. Good-to-have skills: AWS Glue. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full-time education. Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and optimize data workflows, ensuring that the data infrastructure supports the organization's analytical needs effectively. Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge in data engineering. - Continuously evaluate and improve data processing workflows to enhance efficiency and performance. Professional & Technical Skills: - Must-have skills: proficiency in Apache Spark. - Good-to-have skills: experience with AWS Glue. - Strong understanding of data pipeline architecture and design. - Experience with ETL processes and data integration techniques. - Familiarity with data quality frameworks and best practices. Additional Information: - The candidate should have a minimum of 5 years of experience in Apache Spark. - This position is based at our Hyderabad office. - A 15 years full-time education is required.

Posted 1 month ago

Apply

0.0 - 1.0 years

2 - 3 Lacs

Noida

Work from Office

As an intern, you will play a key role in supporting our data operations by handling Level 1 (L1) alert monitoring for both ingestion and analytics pipelines. You'll be responsible for performing L1 troubleshooting on assigned ingestion and analytics tasks as part of our business-as-usual (BAU) activities. This role also involves collaborating with multiple teams to ensure timely resolution of issues and to maintain the smooth functioning of data workflows. It's a great opportunity to gain hands-on experience in real-time monitoring, issue triaging, and inter-team coordination in a production environment. A day in the life: create world-class customer-facing documentation that delights and excites customers; remove ambiguity by documenting things, making the teams more efficient and effective; convert tacit knowledge to explicit knowledge; handle L1 alert monitoring of ingestions and analytics; perform L1 troubleshooting for issues in assigned ingestion/analytics tasks (BAUs); collaborate with other teams to get issues resolved; adhere to JIRA processes to avoid SLA breaches; analyze and resolve low-priority customer tickets. What you need: basic knowledge of SQL and the ability to write SQL queries; an ambitious person who can work in a flexible startup environment with only one thing in mind: getting things done; excellent written and verbal communication skills; comfort working weekend, night, and rotational shifts. Preferred skills: SQL/ETL/Python support; support processes (SLAs, OLAs, product or application support); data ingestion, analytics, Power BI.
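A taste of the basic SQL this internship asks for, e.g. triaging which pipelines have failed alerts, runnable with the stdlib `sqlite3` module (the table, pipeline names, and statuses are invented for illustration):

```python
import sqlite3

# An in-memory stand-in for an alerts table an L1 monitor might query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (pipeline TEXT, status TEXT)")
conn.executemany("INSERT INTO alerts VALUES (?, ?)", [
    ("ingest_orders", "OK"),
    ("ingest_clicks", "FAILED"),
    ("analytics_daily", "FAILED"),
])

# The kind of L1 triage query the role describes: which pipelines need attention?
failed = [row[0] for row in conn.execute(
    "SELECT pipeline FROM alerts WHERE status = 'FAILED' ORDER BY pipeline"
)]
print(failed)  # ['analytics_daily', 'ingest_clicks']
```

In practice the same query would run against the monitoring database, with the results feeding JIRA tickets within the SLA window.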

Posted 1 month ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must-have skills: Apache Spark. Good-to-have skills: AWS Glue. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education. Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives. You will also monitor and optimize existing data processes to enhance performance and reliability, making data accessible and actionable for stakeholders. Roles & Responsibilities: - Expected to perform independently and become an SME. - Active participation/contribution in team discussions is required. - Contribute to providing solutions to work-related problems. - Collaborate with data architects and analysts to design data models that meet business needs. - Develop and maintain documentation for data processes and workflows to ensure clarity and compliance. Professional & Technical Skills: - Must-have skills: proficiency in Apache Spark. - Good-to-have skills: experience with AWS Glue. - Strong understanding of data processing frameworks and methodologies. - Experience in building and optimizing data pipelines for performance and scalability. - Familiarity with data warehousing concepts and best practices.
Additional Information: - The candidate should have a minimum of 3 years of experience in Apache Spark. - This position is based at our Bengaluru office. - A 15 years full-time education is required.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Hyderabad

Work from Office

Urgent requirement for Big Data. Notice period: immediate. Location: Hyderabad/Pune. Employment type: C2H. Primary skills: 6-8 years of experience working as a big data developer and supporting environments; strong knowledge of Unix/big data scripting; strong understanding of the big data (CDP/Hive) environment; hands-on with GitHub and CI/CD implementations; an attitude of learning to understand every task being done and the reasoning behind it; ability to work independently on specialized assignments within the context of project deliverables; taking ownership of providing solutions and tools that iteratively increase engineering efficiency; excellent communication skills and a team player. Good to have: Hadoop and Control-M tooling knowledge; automation experience and knowledge of any monitoring tools. Role: You will work with the team handling applications developed using Hadoop/CDP and Hive, within the Data Engineering team and with the Lead Hadoop Data Engineer and Product Owner. You are expected to support the existing application as well as design and build new data pipelines, support evergreening or upgrade activities of CDP/SAS/Hive, and participate in the service management of the application: supporting issue resolution and improving processing performance to avoid issues reoccurring. Ensure the use of Hive, Unix scripting, and Control-M reduces lead time to delivery. Supporting the application during the UK shift, as well as on-call support overnight and on weekends, is mandatory.

Posted 1 month ago

Apply

0.0 years

6 - 9 Lacs

Hyderabad

On-site

Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. Responsibilities and tasks: Understand the business problem and the relevant data: maintain an intimate understanding of company and department strategy; translate analysis requirements into data requirements; identify and understand the data sources that are relevant to the business problem; develop conceptual models that capture the relationships within the data; define the data-quality objectives for the solution; be a subject matter expert in data sources and reporting options. Architect data management systems: design and implement optimal data structures in the appropriate data management system (Hadoop, Teradata, SQL Server, etc.) to satisfy the data requirements; plan methods for archiving/deleting information. Develop, automate, and orchestrate an ecosystem of ETL processes for varying volumes of data: identify and select the optimal access method for each data source (real-time/streaming, delayed, static); determine transformation requirements and develop processes to bring structured and unstructured data from the source into a new physical data model; develop processes to efficiently load the transformed data into the data management system. Prepare data to meet analysis requirements: work with data scientists to implement strategies for cleaning and preparing data for analysis (e.g., outliers, missing data, etc.).
Develop and code data extracts. Follow standard methodologies to ensure data quality and data integrity, and ensure that the data is fit for use in data science applications. Qualifications and experience: 0-7 years of experience developing, delivering, and/or supporting data engineering, advanced analytics, or business intelligence solutions; ability to work with multiple operating systems and tools (e.g., MS Office, Unix, Linux, etc.); experience developing ETL/ELT processes using Apache NiFi and Snowflake; significant experience with big data processing and/or developing applications and data sources via Hadoop, YARN, Hive, Pig, Sqoop, MapReduce, HBase, Flume, etc.; understanding of how distributed systems work; familiarity with software architecture (data structures, data schemas, etc.); strong working knowledge of databases (Oracle, MSSQL, etc.), including SQL and NoSQL; strong mathematics background and analytical, problem-solving, and organizational skills; strong communication skills (written, verbal, and presentation); experience working in a global, multi-functional environment; a minimum of 2 years' experience in any of the following: at least one high-level, object-oriented language (e.g., C#, C++, Java, Python, Perl, etc.), one or more web programming languages (PHP, MySQL, Python, Perl, JavaScript, ASP, etc.), one or more data extraction tools (SSIS, Informatica, etc.), or software development. Ability to travel as needed. Education: B.S. degree in Computer Science, Software Engineering, Electrical Engineering, Applied Mathematics, or a related field of study; M.S. degree preferred. About Micron Technology, Inc.: We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all.
With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com. Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron. AI alert: candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 14 Lacs

Telangana

Work from Office

Urgent requirement for Collibra. Experience: 6+ years. Location: Pune. JD: Primary: Collibra, with experience working with RDBMS databases like Oracle and PostgreSQL. Secondary: experience in Azure PaaS services, with the ability to integrate Collibra DQ workflows with Azure data pipelines using APIs and batch jobs.

Posted 1 month ago

Apply

3.0 years

8 - 10 Lacs

Gurgaon

Remote

Job description: About this role. Aladdin Engineering is seeking a talented, hands-on Data Engineer to join its Regulatory Tech team. The Regulatory Tech team provides a comprehensive surveillance solution for Compliance that helps the firm protect itself against market manipulation, fraud, and other finance-related misconduct. Our product is widely used in the firm and is going through a series of feature buildouts so that it can be offered to external clients. We see a lot of potential, and exciting times ahead. As a team, we nurture and develop a culture that is: Curious: we like to learn new things and have a healthy disrespect for the status quo. Brave: we are willing to get outside our comfort zone. Passionate: we feel personal ownership of our work, and strive to be better. Open: we value and respect others' opinions. Innovative: we conceptualize, design, and implement new capabilities to ensure that Aladdin remains the best platform. We are seeking an ambitious professional with strong technical experience in data engineering. You have a solid understanding of the software development lifecycle and enjoy working in a team of engineers. The ideal candidate shows an aptitude for evaluating and incorporating new technologies. You thrive in a work environment that requires creative problem-solving skills, independent self-direction, open communication, and attention to detail. You are a self-starter, comfortable with ambiguity and working in a fast-paced, ever-changing environment. You are passionate about bringing value to clients.
As a member of the Regulatory Tech team, you will: work with engineers, project managers, technical leads, business owners and analysts throughout the whole SDLC; design and implement new features in our core product's data platform and suspicious-activity detection mechanism; be brave enough to come up with ideas to improve the resiliency, stability and performance of our platform; participate in setting coding standards and guidelines, and identify and document standard methodologies. Desired skills and experience: 3+ years of hands-on experience with Python and SQL; experience with the Snowflake database; experience with Airflow; thorough knowledge of Git, CI/CD and unit/end-to-end testing; interest in data engineering; solid written and verbal communication skills. Nice to have: experience with the DBT and Great Expectations frameworks; experience with Big Data technologies (Spark, Sqoop, HDFS, YARN); experience with Agile development. Our benefits: to help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model: BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. 
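The Python-and-SQL data-engineering work this role describes can be sketched with a minimal, hypothetical data-quality check (the `trades` table, its columns, and the `null_rate` helper are all illustrative, not taken from the posting; a real pipeline would run against Snowflake, typically orchestrated by Airflow):

```python
import sqlite3

def null_rate(conn, table, column):
    """Return the fraction of NULL values in a column (a simple DQ check)."""
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    return nulls / total if total else 0.0

# In-memory demo data standing in for a real warehouse table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER, symbol TEXT)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [(1, "AAPL"), (2, None), (3, "MSFT"), (4, "IBM")])

rate = null_rate(conn, "trades", "symbol")
print(rate)  # 0.25
```

Frameworks such as Great Expectations, mentioned in the posting, package exactly this kind of check (null rates, uniqueness, ranges) as declarative, reusable expectations.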
About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Job Requisition # R255049

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Hyderabad

Work from Office

Immediate openings for Big Data Engineer/Developer (Pan India, Contract). Experience: 5+ years. Skills: Big Data Engineer/Developer. Location: Pan India. Notice Period: Immediate. Employment Type: Contract. Working Mode: Hybrid. Required: Spark/Scala, HQL, Hive, Control-M, Jenkins, Git. Technical analysis and, to some extent, business analysis (knowledge of banking products, credit cards and their transactions).

Posted 1 month ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Hyderabad

Hybrid

Immediate openings for ITSS Senior Azure Developer / Data Engineer (Contract). Experience: 5+ years. Skill: ITSS Senior Azure Developer / Data Engineer. Locations: Bangalore, Hyderabad, Chennai and Noida. Notice Period: Immediate. Employment Type: Contract. Working Mode: Hybrid. Job Description: Senior Azure Developer. Role: Data Engineer (mid-level). Primary skillsets: Azure (ADF/ADLS/Key Vault). Secondary skillsets: Databricks. Good to have: strong communication skills; experience with cloud applications, especially Azure (essentially the items covered in the primary skillsets above); experience working in an agile framework; experience with ETL, SQL and PySpark; the ability to run with a task without waiting for direction; experience with Git repositories and release pipelines; Azure certifications; a Databricks certification is icing on the cake.
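The ETL pattern this role calls for can be sketched in plain Python (a toy extract-transform-load over in-memory records; the field names and the dict "sink" are invented for illustration, and in practice this logic would run as PySpark on Databricks, landing data in ADLS):

```python
# Extract: raw records as they might arrive from a source system
raw = [
    {"id": 1, "amount": "120.50", "status": "OK"},
    {"id": 2, "amount": "bad", "status": "OK"},      # malformed amount
    {"id": 3, "amount": "75.00", "status": "FAILED"},  # filtered out
]

def transform(records):
    """Drop malformed rows, keep only OK records, and cast amounts to float."""
    out = []
    for r in records:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # reject rows whose amount does not parse
        if r["status"] == "OK":
            out.append({"id": r["id"], "amount": amount})
    return out

# Load: a dict standing in for a real sink such as a Delta table
store = {}
for row in transform(raw):
    store[row["id"]] = row["amount"]

print(store)  # {1: 120.5}
```

The same extract/transform/load split maps directly onto a PySpark job: read from a source, apply DataFrame transformations, write to the sink.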

Posted 1 month ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Telangana

Work from Office

Education: Bachelor's degree in Computer Science, Engineering, or a related field; a Master's degree is preferred. Experience: minimum of 4+ years of experience in data engineering or a similar role. Strong programming skills in Python and advanced SQL. Strong experience with NumPy, Pandas and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.
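The NumPy/Pandas DataFrame skills listed above can be illustrated with a short, hypothetical example (the `region`/`revenue` columns are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical sales data
df = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "revenue": [100.0, 80.0, 120.0, 60.0],
})

# Aggregate with pandas, then apply an element-wise NumPy transformation
totals = df.groupby("region")["revenue"].sum()
log_totals = np.log(totals)

print(totals["North"])  # 220.0
```

Day-to-day data-engineering work of this kind is mostly such groupby/aggregate/transform chains, scaled up from toy frames to real tables.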

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies