
339 MapReduce Jobs - Page 5

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

2.0 years

0 Lacs

India

Remote


๐Ÿข Company: Natlov Technologies Pvt Ltd ๐Ÿ•’ Experience Required: 1โ€“2 Years ๐ŸŒ Location: Remote (India-based candidates preferred) ๐Ÿง  About the Role: We are seeking passionate Data Engineers with hands-on experience in building scalable, distributed data systems and high-volume transaction applications. Join us to work with modern Big Data technologies and cloud platforms to architect, stream, and analyze data efficiently. ๐Ÿ› ๏ธ What Weโ€™re Looking For (Experience: 1โ€“2 Years): ๐Ÿ”น Strong hands-on programming experience in Scala , Python , and other object-oriented languages ๐Ÿ”น Experience in building distributed/scalable systems and high-volume transaction applications ๐Ÿ”น Solid understanding of Big Data technologies: โ€ข Apache Spark (Structured & Real-Time Streaming) โ€ข Apache Kafka โ€ข Delta Lake ๐Ÿ”น Experience with ETL workflows using MapReduce , Spark , and Hadoop ๐Ÿ”น Proficiency in SQL querying and SQL Server Management Studio (SSMS) ๐Ÿ”น Experience with Snowflake or Databricks ๐Ÿ”น Dashboarding and reporting using Power BI ๐Ÿ”น Familiarity with Kafka , Zookeeper , and YARN for ingestion and orchestration ๐Ÿ”น Strong analytical and problem-solving skills ๐Ÿ”น Energetic, motivated, and eager to learn and grow in a collaborative team environment ๐Ÿ“ Work Mode: Remote ๐Ÿ“ฉ How to Apply: Send your resume to techhr@natlov.com Be a part of a passionate and forward-thinking team at Natlov Technologies Pvt Ltd , where we're redefining how data is architected, streamed, analyzed, and delivered. Letโ€™s build the future of data together! ๐Ÿ’ผ #DataEngineer #BigData #ApacheSpark #Kafka #DeltaLake #SQL #PowerBI #Databricks #Snowflake #ETL #Python #Scala #SSMS #HiringNow #NatlovTechnologies #1to2YearsExperience #TechJobs #CareerOpportunity #RemoteJobs Show more Show less

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About PhonePe Group: PhonePe is India’s leading digital payments company with 50 crore (500 million) registered users and 3.7 crore (37 million) merchants, covering over 99% of the postal codes across India. On the back of its leadership in digital payments, PhonePe has expanded into financial services (Insurance, Mutual Funds, Stock Broking, and Lending) as well as adjacent tech-enabled businesses such as Pincode for hyperlocal shopping and Indus App Store, which is India's first localized App Store. The PhonePe Group is a portfolio of businesses aligned with the company's vision to offer every Indian an equal opportunity to accelerate their progress by unlocking the flow of money and access to services.

Culture: At PhonePe, we take extra care to make sure you give your best at work, every day! And creating the right environment for you is just one of the things we do. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. Being enthusiastic about tech is a big part of being at PhonePe. If you like building technology that impacts millions, ideating with some of the best minds in the country and executing on your dreams with purpose and speed, join us!

Job Overview: As a Site Reliability Engineer (SRE) specializing in the on-premises Data Platform, you will play a critical role in deployment and in ensuring the reliability, scalability, and performance of our Cloudera Data Platform (CDP) infrastructure. You will collaborate closely with cross-functional teams to design, implement, and maintain robust systems that support our data-driven initiatives. The ideal candidate will have a deep understanding of the Data Platform, strong troubleshooting skills, and a proactive mindset towards automation and optimization. You will play a pivotal role in ensuring the smooth functioning, operation, performance, and security of a large, high-density Cloudera-based infrastructure.

Roles and Responsibilities:
Implementation: Work on tasks related to the implementation of Cloudera Data Platform on-premises, and be part of planning, installation, configuration, and integration with existing systems.
Infrastructure Management: Manage and maintain the Cloudera-based infrastructure, ensuring optimal performance, high availability, and scalability. This includes monitoring system health and performing routine maintenance tasks. Strong troubleshooting skills and operational expertise in areas such as system capacity, bottlenecks, memory, CPU, OS, storage, and networking. Create runbooks and automate them using scripting tools like shell scripting, Python, etc. Working knowledge of any of the configuration management tools like Terraform, Ansible, or SALT.
Data Security and Compliance: Implement and enforce security best practices to safeguard data integrity and confidentiality within the Cloudera environment. Ensure compliance with relevant regulations and standards (e.g., GDPR, HIPAA, DPR).
Performance Optimization: Continuously optimize the Cloudera infrastructure to enhance performance, efficiency, and cost-effectiveness. Identify and resolve bottlenecks, tune configurations, and implement best practices for resource utilization.
Capacity Planning: Plan and tune the performance of Hadoop clusters, monitor resource utilization trends, and plan for future capacity needs. Proactively identify potential capacity constraints and propose solutions to address them.
Collaboration: Collaborate effectively with infrastructure, network, database, application, and business intelligence teams to ensure high data quality and availability. Work closely with teams to optimize the overall performance of the PhonePe Hadoop ecosystem.
Backup and Disaster Recovery: Implement robust backup and disaster recovery strategies to ensure data protection and business continuity. Test and maintain backup and recovery procedures regularly. Develop tools and services to enhance debuggability and supportability.
Patches & Upgrades: Routinely apply recommended patches and perform rolling upgrades of the platform in accordance with advisories from Cloudera, InfoSec, and Compliance.
Documentation and Knowledge Sharing: Create comprehensive documentation for configurations, processes, and procedures related to the Cloudera Data Platform. Share knowledge and best practices with team members to foster continuous learning and improvement.
Communication: Collaborate effectively with cross-functional teams including data engineers, developers, and IT operations personnel. Communicate project status, issues, and resolutions clearly and promptly.

Skills Required:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proficiency in Linux system administration, shell scripting, and networking concepts including iptables and IPsec.
Strong understanding of networking, open-source technologies, and tools.
3-5 years of experience in the design, setup, and management of large-scale Hadoop clusters, ensuring high availability, fault tolerance, and performance optimization.
Strong understanding of distributed computing principles and experience with Hadoop ecosystem technologies (HDFS, MapReduce, YARN, Hive, Spark, etc.).
Experience with Kerberos and LDAP.
Strong knowledge of databases like MySQL, NoSQL stores, and SQL Server.
Hands-on experience with configuration management tools (e.g., Salt, Ansible, Puppet, Chef).
Strong scripting skills (e.g., Perl, Python, Bash) for automation and troubleshooting.
Experience with monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack).
Knowledge of networking principles and protocols (TCP/IP, UDP, DNS, DHCP, etc.).
Experience with managing *nix based machines and strong working knowledge of quintessential Unix programs and tools (e.g., Ubuntu, Fedora, Red Hat).
Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
Excellent analytical, problem-solving, and troubleshooting skills.
Proven ability to work well under pressure and manage multiple priorities simultaneously.

Good To Have:
Cloudera Certified Administrator (CCA) or Cloudera Certified Professional (CCP) certification preferred.
Minimum 2 years of experience in managing and administering medium/large Hadoop-based environments (>100 machines); Cloudera Data Platform (CDP) experience is highly desirable.
Familiarity with Open Data Lake components such as Ozone, Iceberg, Spark, Flink, etc.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes, OpenShift) is a plus.
Design, develop, and maintain Airflow DAGs and tasks to automate BAU processes, ensuring they are robust, scalable, and efficient (see the sketch below).
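Since the last item above calls out Airflow DAGs for automating BAU processes, here is a minimal illustrative sketch of such a DAG. The DAG id, schedule, and health-check command are hypothetical placeholders, and it assumes a recent Airflow 2.x install.

# Minimal Airflow DAG sketch for a routine platform health check.
# DAG id, schedule, and the HDFS command are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="hdfs_daily_health_check",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",              # nightly at 02:00
    catchup=False,
) as dag:
    # Report HDFS capacity and under-replicated blocks to the task log.
    check_hdfs = BashOperator(
        task_id="dfsadmin_report",
        bash_command="hdfs dfsadmin -report",
    )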
PhonePe Full Time Employee Benefits (not applicable for intern or contract roles):
Insurance Benefits - Medical Insurance, Critical Illness Insurance, Accidental Insurance, Life Insurance
Wellness Program - Employee Assistance Program, Onsite Medical Center, Emergency Support System
Parental Support - Maternity Benefit, Paternity Benefit Program, Adoption Assistance Program, Day-care Support Program
Mobility Benefits - Relocation Benefits, Transfer Support Policy, Travel Policy
Retirement Benefits - Employee PF Contribution, Flexible PF Contribution, Gratuity, NPS, Leave Encashment
Other Benefits - Higher Education Assistance, Car Lease, Salary Advance Policy

Working at PhonePe is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. Read more about PhonePe on our blog: Life at PhonePe, PhonePe in the news.

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Chennai

On-site

The Data Science Analyst 2 is a developing professional role. Applies specialty area knowledge in monitoring, assessing, analyzing and/or evaluating processes and data. Identifies policy gaps and formulates policies. Interprets data and makes recommendations. Researches and interprets factual information. Identifies inconsistencies in data or results, defines business issues and formulates recommendations on policies, procedures or practices. Integrates established disciplinary knowledge within own specialty area with basic understanding of related industry practices. Good understanding of how the team interacts with others in accomplishing the objectives of the area. Develops working knowledge of industry practices and standards. Limited but direct impact on the business through the quality of the tasks/services provided. Impact of the job holder is restricted to own team.

Responsibilities:
The Data Engineer is responsible for building data engineering solutions using next-generation data techniques. The individual will work with tech leads, product owners, customers, and technologists to deliver data products/solutions in a collaborative and agile environment.
Responsible for design and development of big data solutions. Partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop.
Responsible for moving all legacy workloads to the cloud platform.
Work with data scientists to build client pipelines using heterogeneous sources and provide engineering services for data science applications.
Ensure automation through CI/CD across platforms, both in the cloud and on-premises.
Define needs around maintainability, testability, performance, security, quality, and usability for the data platform.
Drive implementation of consistent patterns, reusable components, and coding standards for data engineering processes.
Convert SAS-based pipelines into languages like PySpark and Scala to execute on Hadoop, Snowflake, and non-Hadoop ecosystems (a minimal sketch of such a conversion follows at the end of this listing).
Tune Big Data applications on Hadoop, cloud, and non-Hadoop platforms for optimal performance.
Applies in-depth understanding of how data analytics collectively integrate within the sub-function as well as coordinates and contributes to the objectives of the entire function.
Produces detailed analysis of issues where the best course of action is not evident from the information available, but actions must be recommended/taken.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
2-4 years of total IT experience.
Experience with Hadoop (Cloudera)/big data technologies/cloud/AI tools.
Hands-on experience with HDFS, MapReduce, Hive, Impala, Spark, Kafka, Kudu, Kubernetes, dashboard tools, Snowflake, AWS tools, AI/ML libraries and tools, etc.
Experience in designing and developing data pipelines for data ingestion or transformation.
System-level understanding - data structures, algorithms, distributed storage & compute tools, SQL expertise, shell scripting, scheduling tools, Scrum/Agile methodologies.
Can-do attitude toward solving complex business problems; good interpersonal and teamwork skills.

Education: Bachelor's/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Data Science
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
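As a concrete illustration of the SAS-to-PySpark conversion mentioned in the responsibilities above, here is a minimal sketch of a SAS-style group summary (in the spirit of PROC MEANS) rewritten as a PySpark batch job. The table and column names are hypothetical.

# Hypothetical rewrite of a SAS PROC MEANS-style step in PySpark:
# summarize transaction amounts by account, as a batch job on Hadoop.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas-to-pyspark").getOrCreate()

txns = spark.table("edw.transactions")  # placeholder Hive table

summary = (
    txns.groupBy("account_id")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
        F.avg("amount").alias("avg_amount"),
    )
)

summary.write.mode("overwrite").saveAsTable("edw.txn_summary")  # placeholder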

Posted 1 week ago

Apply

6.0 years

6 Lacs

Noida

On-site

About the Role:
This position requires someone to work on complex technical projects and to work closely with peers in an innovative and fast-paced environment. For this role, we require someone with a strong product design sense and specialization in Hadoop and Spark technologies.

Requirements:
Minimum 6-8 years of experience in Big Data technologies.

The position:
Grow our analytics capabilities with faster, more reliable tools, handling petabytes of data every day.
Brainstorm and create new platforms that can help in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability.
Diagnose problems across the entire technical stack and make the changes needed to fix them.
Design and develop a real-time events pipeline for data ingestion for real-time dashboarding.
Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
Design and implement new components and various emerging technologies in the Hadoop ecosystem, and ensure successful execution of various projects.
Be a brand ambassador for Paytm - Stay Hungry, Stay Humble, Stay Relevant!

Preferred Qualification:
Bachelor's/Master's Degree in Computer Science or equivalent.

Skills that will help you succeed in this role:
Strong hands-on experience with Hadoop, MapReduce, Hive, Spark, PySpark, etc.
Excellent programming/debugging skills in Python/Java/Scala.
Experience with any scripting language such as Python, Bash, etc.
Good to have experience working with NoSQL databases like HBase, Cassandra.
Hands-on programming experience with multithreaded applications.
Good to have experience with databases, SQL, and messaging queues like Kafka.
Good to have experience in developing streaming applications, e.g., Spark Streaming, Flink, Storm, etc.
Good to have experience with AWS and cloud technologies such as S3, and with caching architectures like Redis, etc.

Why join us:
Because you get an opportunity to make a difference, and have a great time doing that. You are challenged and encouraged here to do stuff that is meaningful for you and for those we serve. You should work with us if you think seriously about what technology can do for people. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be. To know more about the exciting work we do: https://paytm.com/blog/engineering/

Compensation:
If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


LinkedIn is the world's largest professional network, built to create economic opportunity for every member of the global workforce. Our products help people make powerful connections, discover exciting opportunities, build necessary skills, and gain valuable insights every day. We're also committed to providing transformational opportunities for our own employees by investing in their growth. We aspire to create a culture that's built on trust, care, inclusion, and fun, where everyone can succeed. Join us to transform the way the world works.

This role will be based in Bangalore, India. At LinkedIn, we trust each other to do our best work where it works best for us and our teams. This role offers a hybrid work option, meaning you can both work from home and commute to a LinkedIn office, depending on what's best for you and when it is important for your team to be together.

As part of our world-class software engineering team, you will be charged with building the next-generation infrastructure and platforms for LinkedIn, including but not limited to: an application and service delivery platform, massively scalable data storage and replication systems, a cutting-edge search platform, a best-in-class AI platform, an experimentation platform, a privacy and compliance platform, etc. You will work and learn among the best, putting to use your passion for distributed technologies and algorithms, API and systems design, and your passion for writing code that performs at extreme scale. LinkedIn has already pioneered well-known open-source infrastructure projects like Apache Kafka, Pinot, Azkaban, Samza, Venice, Datahub, Feathr, etc. We also work with industry-standard open-source infrastructure products like Kubernetes, gRPC, and GraphQL - come join our infrastructure teams and share the knowledge with a broader community while making a real impact within our company.

Responsibilities:
- You will own the technical strategy for broad or complex requirements with insightful and forward-looking approaches that go beyond the direct team and solve large open-ended problems.
- You will design, implement, and optimize the performance of large-scale distributed systems with security and compliance in mind.
- You will improve the observability and understandability of various systems with a focus on improving developer productivity and system sustenance.
- You will effectively communicate with the team, partners and stakeholders.
- You will mentor other engineers, define our challenging technical culture, and help to build a fast-growing team.
- You will work closely with the open-source community to participate in and influence cutting-edge open-source projects (e.g., Apache Iceberg).
- You will deliver incremental impact by driving innovation while iteratively building and shipping software at scale.
- You will diagnose technical problems, debug in production environments, and automate routine tasks.

Basic Qualifications:
- BA/BS Degree in Computer Science or related technical discipline, or related practical experience.
- 8+ years of industry experience in software design, development, and algorithm-related solutions.
- 8+ years of experience programming in object-oriented languages such as Java, Python, Go, and/or functional languages such as Scala or other relevant coding languages
- Hands-on experience developing distributed systems, large-scale systems, databases and/or backend APIs

Preferred Qualifications:
- Experience with the Hadoop (or similar) ecosystem (Gobblin, Kafka, Iceberg, ORC, MapReduce, YARN, HDFS, Hive, Spark, Presto)
- Experience with industry, open-source projects and/or academic research in data management, relational databases, and/or large-data, parallel and distributed systems
- Experience in architecting, building, and running large-scale systems
- Experience with open-source project management and governance

Suggested Skills:
- Distributed systems
- Backend Systems Infrastructure
- Java

You will Benefit from our Culture: We strongly believe in the well-being of our employees and their families. That is why we offer generous health and wellness programs and time away for employees of all levels.

India Disability Policy: LinkedIn is an equal employment opportunity employer offering opportunities to all job seekers, including individuals with disabilities. For more information on our equal opportunity policy, please visit https://legal.linkedin.com/content/dam/legal/Policy_India_EqualOppPWD_9-12-2023.pdf

Global Data Privacy Notice for Job Candidates: This document provides transparency around the way in which LinkedIn handles personal data of employees and job applicants: https://legal.linkedin.com/candidate-portal

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


The Data Science Analyst 2 is a developing professional role. Applies specialty area knowledge in monitoring, assessing, analyzing and/or evaluating processes and data. Identifies policy gaps and formulates policies. Interprets data and makes recommendations. Researches and interprets factual information. Identifies inconsistencies in data or results, defines business issues and formulates recommendations on policies, procedures or practices. Integrates established disciplinary knowledge within own specialty area with basic understanding of related industry practices. Good understanding of how the team interacts with others in accomplishing the objectives of the area. Develops working knowledge of industry practices and standards. Limited but direct impact on the business through the quality of the tasks/services provided. Impact of the job holder is restricted to own team.

Responsibilities:
The Data Engineer is responsible for building data engineering solutions using next-generation data techniques. The individual will work with tech leads, product owners, customers, and technologists to deliver data products/solutions in a collaborative and agile environment.
Responsible for design and development of big data solutions. Partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop.
Responsible for moving all legacy workloads to the cloud platform.
Work with data scientists to build client pipelines using heterogeneous sources and provide engineering services for data science applications.
Ensure automation through CI/CD across platforms, both in the cloud and on-premises.
Define needs around maintainability, testability, performance, security, quality, and usability for the data platform.
Drive implementation of consistent patterns, reusable components, and coding standards for data engineering processes.
Convert SAS-based pipelines into languages like PySpark and Scala to execute on Hadoop, Snowflake, and non-Hadoop ecosystems.
Tune Big Data applications on Hadoop, cloud, and non-Hadoop platforms for optimal performance.
Applies in-depth understanding of how data analytics collectively integrate within the sub-function as well as coordinates and contributes to the objectives of the entire function.
Produces detailed analysis of issues where the best course of action is not evident from the information available, but actions must be recommended/taken.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
2-4 years of total IT experience.
Experience with Hadoop (Cloudera)/big data technologies/cloud/AI tools.
Hands-on experience with HDFS, MapReduce, Hive, Impala, Spark, Kafka, Kudu, Kubernetes, dashboard tools, Snowflake, AWS tools, AI/ML libraries and tools, etc.
Experience in designing and developing data pipelines for data ingestion or transformation.
System-level understanding - data structures, algorithms, distributed storage & compute tools, SQL expertise, shell scripting, scheduling tools, Scrum/Agile methodologies.
Can-do attitude toward solving complex business problems; good interpersonal and teamwork skills.

Education: Bachelor's/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Data Science
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 2 weeks ago

Apply

2.0 - 4.0 years

2 - 6 Lacs

Hyderabad

Work from Office


Fusion Plus Solutions Inc is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office


Tech Stalwart Solution Private Limited is looking for a Sr. Data Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

13 - 18 Lacs

Pune

Work from Office


ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems - the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

ZS's Platform Development team designs, implements, tests and supports ZS's ZAIDYN Platform, which helps drive superior customer experiences and revenue outcomes through integrated products & analytics. Whether writing distributed optimization algorithms or advanced mapping and visualization interfaces, you will have an opportunity to solve challenging problems, make an immediate impact and contribute to bringing better health outcomes.

What you'll do:
Pair program, write unit tests, lead code reviews, and collaborate with QA analysts to ensure you develop the highest quality multi-tenant software that can be productized.
As part of our full-stack product engineering team, build multi-tenant cloud-based software products/platforms and internal assets that leverage cutting-edge technologies based on the Amazon AWS cloud platform.
Work with junior developers to implement large features that are on the cutting edge of Big Data.
Be a technical leader to your team, and help them improve their technical skills.
Stand up for engineering practices that ensure quality products: automated testing, unit testing, agile development, continuous integration, code reviews, and technical design.
Work with product managers and architects to design product architecture and to work on POCs.
Take immediate responsibility for project deliverables.
Understand client business issues and design features that meet client needs.
Undergo on-the-job and formal trainings and certifications, and constantly advance your knowledge and problem-solving skills.

What you'll bring:
Bachelor's Degree in CS, IT, or related discipline.
Strong analytic, problem-solving, and programming ability.
Experience in coding in an object-oriented language such as Python, Java, C#, etc.
Hands-on experience with Apache Spark, EMR, Hadoop, HDFS, or other big data technologies.
Experience with development on the AWS (Amazon Web Services) platform is preferable.
Experience in Linux shell or PowerShell scripting is preferable.
Experience in HTML5, JavaScript, and JavaScript libraries is preferable.
Understanding of data science algorithms.
Good to have: Pharma domain understanding.
Initiative and drive to contribute.
Excellent organizational and task management skills.
Strong communication skills.
Ability to work in global cross-office teams.
ZS is a global firm; fluency in English is required.

Perks & Benefits:
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel:
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application:
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find out more at www.zs.com

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Thane, Maharashtra, India

On-site


Job Requirements
Role/Job Title: Data Engineer - Gen AI
Function/Department: Data & Analytics
Place of Work: Mumbai

Job Purpose:
The data engineer will work with our data scientists who are building solutions in the domains of text, audio, images, and tabular data. They will be responsible for working with large volumes of structured and unstructured data across storage, retrieval, and augmentation.

Job & Responsibilities:
Build data engineering pipelines focused on unstructured data.
Conduct requirements gathering and project scoping sessions with subject matter experts, business users, and executive stakeholders to discover and define business data needs.
Design, build, and optimize the data architecture and extract, transform, and load (ETL) pipelines to make them accessible for data scientists and the products built by them.
Work on the end-to-end data lifecycle from data ingestion through data transformation to the data consumption layer; be well-versed with APIs and their usability.
Drive the highest standards in data reliability, data integrity, and data governance, enabling accurate, consistent, and trustworthy data sets.
A suitable candidate will also demonstrate experience with big data infrastructure, inclusive of MapReduce, Hive, HDFS, YARN, HBase, MongoDB, DynamoDB, etc.
Create technical design documentation for projects/pipelines.
Good skills in technical debugging of code in case of issues; also, working with git for code versioning.

Education Qualification:
Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA)
Post-Graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA)

Experience Range: 5-10 years of relevant experience

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Experience: 5-10 years
Location: Hyderabad/Chennai

Must-Have:
1. In-depth knowledge of PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations
2. In-depth knowledge of developing, training, and deploying ML models
3. Knowledge of machine learning concepts and ML algorithms

Good-to-Have:
1. Exposure to job scheduling and monitoring environments (e.g., Control-M)
2. Any ETL tool exposure
3. Cloud migration experience

Responsibility of / Expectations from the Role:
Developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations (a minimal sketch follows below).
Build scalable and reusable code for optimized data retrieval and movement across sources.
Develop libraries and maintain processes for the business to access data, and write MapReduce programs.
Write scalable and maintainable scripts using Python for data transfers.
Assess, prioritize, and guide the team in the design and development of features as per business requirements.
Ability to fetch data from various sources and analyze it for a better understanding of how the business performs, and to build AI tools that automate certain processes within the environment.
Deep technical understanding of how to communicate complex data in an accessible way, while also having the ability to visualize findings.
Ability to build, train, and deploy ML models into a production-ready hosted environment like AWS SageMaker.
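For illustration of the enrichment pattern named in the first responsibility: read from an external source, merge with another dataset, and load to a target. This is a minimal sketch only; the connection details, credentials, and paths are hypothetical placeholders.

# Illustrative PySpark enrichment job: an external JDBC source joined to
# a data-lake table and written to a target path. All names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("enrichment-job").getOrCreate()

customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/crm")  # placeholder
    .option("dbtable", "public.customers")
    .option("user", "etl_user")
    .option("password", "...")  # supply via a secret manager in practice
    .load()
)

orders = spark.read.parquet("s3://bucket/raw/orders/")  # placeholder

# Enrich each order with its customer attributes, then load to the target.
enriched = orders.join(customers, on="customer_id", how="left")
enriched.write.mode("overwrite").parquet("s3://bucket/curated/orders/")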

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Introduction:
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities:
As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
Core Java, Spring Boot, Java/J2EE, Microservices
Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), Spark
Good to have: Python

Preferred Technical and Professional Experience: None

Posted 2 weeks ago

Apply

6.0 - 11.0 years

14 - 17 Lacs

Mysuru

Work from Office


As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact!

Responsibilities:
Manage end-to-end feature development and resolve challenges faced in implementing it.
Learn new technologies and apply them in feature development within the time frame provided.
Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Overall, more than 6 years of experience, with 4+ years of strong hands-on experience in Python and Spark.
Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
Strong problem-solving skills.

Preferred technical and professional experience:
Good to have: hands-on experience with cloud technologies (AWS/GCP/Azure).

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Hyderabad

Work from Office


Job Role:
Strong Spark programming experience with Java. Good knowledge of SQL query writing and shell scripting. Experience working in Agile mode. Analyze, design, develop, deploy, and operate high-performance, high-quality services that serve users in a cloud environment. Good understanding of the client ecosystem and expectations. In charge of code reviews, the integration process, test organization, and quality of delivery. Take part in development. Experienced in writing queries using SQL. Experienced with deploying and operating code in a cloud environment. Experienced in working without much supervision.

Your Profile:
Primary Skill: Java, Spark, SQL
Secondary Skill/Good to have: Hadoop or any cloud technology, Kafka, or BO.

What you'll love about working here:
Choosing Capgemini means having the opportunity to make a difference, whether for the world's leading businesses or for society. It means getting the support you need to shape your career in the way that works for you. It means that when the future doesn't look as bright as you'd like, you have the opportunity to make change, to rewrite it. When you join Capgemini, you don't just start a new job. You become part of something bigger. A diverse collective of free-thinkers, entrepreneurs and experts, all working together to unleash human energy through technology, for an inclusive and sustainable future.

At Capgemini, people are at the heart of everything we do! You can exponentially grow your career by being part of innovative projects and taking advantage of our extensive Learning & Development programs. With us, you will experience an inclusive, safe, healthy, and flexible work environment to bring out the best in you! You also get a chance to make positive social change and build a better world by taking an active role in our Corporate Social Responsibility and Sustainability initiatives. And whilst you make a difference, you will also have a lot of fun.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Role:
The Senior Software Engineer will be responsible for developing solutions with a high level of innovation, high quality, and faster time to market. This position interacts with product managers, engineering leaders, architects, software developers, and business operations on the definition and delivery of highly scalable and secure solutions.

The role includes:
• Hands-on developer who writes high-quality, secure code that is modular, functional, and testable.
• Create or introduce, test, and deploy new technology to optimize the service.
• Contribute to all parts of the software's development, including design, development, documentation, and testing.
• Have strong ownership.
• Communicate, collaborate, and work effectively in a global environment.
• Responsible for ensuring application stability in production by creating solutions that provide operational health.
• Mentoring and leading new developers while driving modern engineering practices.

All About You:
• Strong analytical and excellent problem-solving skills, and experience working in an Agile environment.
• Experience with XP, TDD, and BDD in the software development process.
• Proficiency in Java, Scala & SQL (Oracle, Postgres, H2, Hive, & HBase) and building pipelines.
• Expertise and deep understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce; tools like Hive and Pig/Flume; data processing frameworks like Spark; cloud platforms; orchestration tools such as Apache NiFi / Airflow; and Apache Kafka.
• Expertise in web applications (Spring Boot, Angular, Java, PCF), web services (REST/OAuth), and Big Data technologies (Hadoop, Spark, Hive, HBase) and tools (Sonar, Splunk, Dynatrace).
• Expertise in SQL, Oracle, and Postgres.
• Experience in microservices and event-driven architecture.
• Soft skills: strong verbal and written communication to demo features to product owners; strong leadership qualities to mentor and support junior team members; proactive, with the initiative to take development work from inception to implementation.
• Familiar with secure coding standards (e.g., OWASP, CWE, SEI CERT) and vulnerability management.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description:
Do you want to be a leader in the team that takes Transportation and Retail models to the next generation? Do you bring solid analytical thinking and metrics-driven decision making, and want to solve problems with solutions that will meet the growing worldwide need? Then Transportation is the team for you. We are looking for top-notch Data Engineers to be part of our world-class Business Intelligence for Transportation team.

4-7 years of experience performing quantitative analysis, preferably for an Internet or technology company.
Strong experience in Data Warehouse and Business Intelligence application development.
Data Analysis: understand business processes, logical data models, and relational database implementations.
Expert knowledge in SQL; optimize complex queries.
Basic understanding of statistical analysis; experience in test design and measurement; able to execute research projects and generate practical results and recommendations.
Proven track record of working on complex modular projects, and assuming a leading role in such projects.
Highly motivated, self-driven, capable of defining own design and test scenarios.
Experience with scripting languages, e.g., Perl, Python, preferred.
BS/MS degree in Computer Science.
Evaluate and implement various big data technologies and solutions (Redshift, Hive/EMR, Tez, Spark) to optimize processing of extremely large datasets in an accurate and timely fashion.
Experience with large-scale data processing, data structure optimization, and scalability of algorithms is a plus.

Key job responsibilities:
Responsible for designing, building, and maintaining complex data solutions for Amazon's Operations businesses.
Actively participates in the code review process, design discussions, team planning, and operational excellence, and constructively identifies problems and proposes solutions.
Makes appropriate trade-offs, reuses where possible, and is judicious about introducing dependencies.
Makes efficient use of resources (e.g., system hardware, data storage, query optimization, AWS infrastructure, etc.).
Knows about recent advances in distributed systems (e.g., MapReduce, MPP architectures, external partitioning).
Asks the right questions when the data model and requirements are not well defined, and comes up with designs that are scalable, maintainable, and efficient.
Makes enhancements that improve the team's data architecture, making it better and easier to maintain (e.g., data auditing solutions, automating ad-hoc or manual operation steps).
Owns the data quality of important datasets and any new changes/enhancements.

Basic Qualifications:
3+ years of data engineering experience.
4+ years of SQL experience.
Experience with data modeling, warehousing, and building ETL pipelines.

Preferred Qualifications:
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions (a minimal load sketch follows below).
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases).

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2941103
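As one concrete illustration of the Redshift and S3 experience listed above, here is a minimal sketch of loading a curated S3 dataset into Redshift with a COPY statement issued through psycopg2. The cluster endpoint, table, bucket, and IAM role are hypothetical placeholders.

# Illustrative load of an S3 dataset into Redshift via COPY, using psycopg2.
# Endpoint, database, table, bucket, and IAM role are all placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # supply via a secret manager in practice
)

copy_sql = """
    COPY ops.shipments
    FROM 's3://bucket/curated/shipments/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""

# The connection context manager wraps the COPY in a transaction and commits it.
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
conn.close()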

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Introduction:
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities:
As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
Core Java, Spring Boot, Java/J2EE, Microservices
Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), Spark
Good to have: Python

Preferred Technical and Professional Experience: None

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


TransUnion's Job Applicant Privacy Notice

What We'll Bring:
We are seeking a talented and experienced Senior Data Engineer/Big Data Developer to join our growing team. In this role, you will be responsible for designing, developing, and maintaining our Big Data infrastructure and pipelines. You will work with large datasets to build scalable solutions for data processing, analytics, and machine learning initiatives. You will collaborate with data scientists, analysts, and other engineers to deliver high-quality, reliable, and performant data solutions that drive business decisions.

Responsibilities (What You'll Bring):
Design, develop, and maintain robust and scalable Big Data pipelines using technologies like Hadoop, Spark, and Scala.
Develop and maintain data ingestion, processing, and storage solutions on cloud platforms such as AWS and GCP.
Write high-quality, well-documented, and testable code in Java, Scala, Python, and SQL.
Optimize data processing performance and ensure data quality.
Collaborate with data scientists and analysts to understand their requirements and build data solutions that meet their needs.
Work with large datasets and perform data analysis to identify trends and patterns.
Implement data governance and security best practices.
Contribute to the evolution of our Big Data architecture and technology stack.
Troubleshoot and resolve issues in the data pipelines and infrastructure.
Stay up-to-date with the latest Big Data technologies and trends.
Participate in code reviews and contribute to the team's knowledge sharing.
Work in a fast-paced, agile environment.

Qualifications (Impact You'll Make):
Bachelor's degree in Computer Science, Engineering, or a related field.
4+ years of experience in software development with a focus on Big Data technologies.
Strong proficiency in Java, Scala, Python, and SQL.
Experience with Hadoop ecosystem components such as HDFS, MapReduce, YARN, and Hive.
Extensive experience with Apache Spark for large-scale data processing.
Experience building and deploying data pipelines on cloud platforms (AWS and/or GCP).
Solid understanding of data modeling, data warehousing, and ETL concepts.
Experience with data analytics and visualization tools is a plus.
Strong understanding of distributed systems principles and architecture.
Proficiency in Unix/Linux environments.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
Experience with data governance and security best practices.
Experience with Agile development methodologies.

Bonus Points:
Experience with data streaming technologies like Kafka or Kinesis.
Experience with NoSQL databases like Cassandra or MongoDB.
Experience with machine learning frameworks like TensorFlow or PyTorch.
AWS or GCP certifications.
Contributions to open-source projects.

This is a hybrid position and involves regular performance of job responsibilities virtually as well as in person at an assigned TU office location for a minimum of two days a week.

TransUnion Job Title: Developer, Data Analysis and Consulting

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Company Description:
Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose - to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description:
The Staff ML Scientist will work with a team to conduct world-class applied AI research on data analytics and contribute to the long-term research agenda in large-scale data analytics and machine learning, as well as deliver innovative technologies and insights to Visa's strategic products and business. This role represents an exciting opportunity to make key contributions to Visa's strategic vision as a world-leading data-driven company. The successful candidate must have a strong academic track record and demonstrate excellent software engineering skills, and will be a self-starter comfortable with ambiguity, with strong attention to detail and excellent collaboration skills.

Essential Functions:
Formulate business problems as technical data problems while ensuring key business drivers are collected in collaboration with product stakeholders.
Work with product engineering to ensure the implementability of solutions. Deliver prototypes and production code based on need.
Experiment with in-house and third-party data sets to test hypotheses on the relevance and value of data to business problems.
Build needed data transformations on structured and unstructured data.
Build and experiment with modeling and scoring algorithms. This includes development of custom algorithms as well as use of packaged tools based on machine learning, analytics, and statistical techniques.
Devise and implement methods for adaptive learning with controls on efficiency, methods for explaining model decisions where vital, model validation, and A/B testing of models.
Devise and implement methods for efficiently monitoring model efficiency and performance in production.
Devise and implement methods for automation of all parts of the predictive pipeline to minimize labor in development and production.
Contribute to development and adoption of shared predictive analytics infrastructure.

This is a hybrid position. Hybrid employees can alternate time between both remote and office. Employees in hybrid roles are expected to work from the office 2-3 set days a week (determined by leadership/site), with a general guidepost of being in the office 50% or more of the time based on business needs.

Basic Qualifications:
• 7+ years of relevant work experience with a Bachelor's Degree, or at least 2 years of work experience with an Advanced degree (e.g., Masters, MBA, JD, MD), or 0 years of work experience with a PhD, or 8+ years of relevant work experience.

Preferred Qualifications:
• 7 or more years of work experience with a Bachelor's Degree, or 4 or more years of relevant experience with an Advanced Degree (e.g., Masters, MBA, JD, MD), or up to 3 years of relevant experience with a PhD.
• Relevant coursework in modeling techniques such as logistic regression, Naïve Bayes, SVM, decision trees, or neural networks.
• Ability to program in one or more scripting languages such as Perl or Python, and one or more programming languages such as Java, C++, or C#.
• Experience with one or more common statistical tools such as SAS, R, KNIME, or MATLAB.
• Deep learning experience with TensorFlow is a plus.
• Experience with Natural Language Processing is a plus.
• Experience working with large datasets using tools like Hadoop, MapReduce, Pig, or Hive is a must.
• Publications or presentations in recognized Machine Learning and Data Mining journals/conferences are a plus.
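For orientation, here is a minimal sketch of the kind of work this posting combines: pulling labeled data out of a Hive table and fitting a scoring model with PySpark. It is illustrative only; the table and column names ("txn_features", "amount", "merchant_risk", "is_fraud") are hypothetical, and a real pipeline would add validation and A/B testing as the posting describes.

```python
# Sketch: train a baseline logistic regression on features from a Hive table.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = (SparkSession.builder
         .appName("fraud-model-sketch")
         .enableHiveSupport()       # lets spark.sql() read Hive tables
         .getOrCreate())

# Pull labeled training data out of the warehouse (hypothetical table).
df = spark.sql("SELECT amount, merchant_risk, is_fraud FROM txn_features")

# Assemble raw columns into the single vector column pyspark.ml expects.
assembler = VectorAssembler(inputCols=["amount", "merchant_risk"],
                            outputCol="features")
train = assembler.transform(df).withColumnRenamed("is_fraud", "label")

# Fit and inspect a baseline model.
model = LogisticRegression(maxIter=50).fit(train)
print(model.coefficients, model.intercept)
```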

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

You Lead the Way. We've Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact; every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together.

American Express is on an exciting Big Data Cloud transformation, 'Next Gen Big Data', focused on providing a best-in-class Data, Analytics and AI/ML experience to Enterprise users and platforms. This Engineering Director role will focus on delivering key platform capabilities around data observability and developer and use-case enablement tooling, working closely with Product teams, Cloud Engineering, Info Security, and other Enterprise teams.

Responsibilities:
- Lead solutioning, engineering, and delivery of core capabilities in Lumi, the Next Gen Big Data platform.
- Deliver best-in-class DataOps, developer, and use-case enablement tooling for users of the platform.
- Engage the team on coding practices, architecture, and design; get under the hood of complex integrated architectures, coding systems, and interface design.
- Collaborate successfully with product owners, designers, and a broad set of internal technical partners (across multiple internal groups).
- Ensure product releases are high quality, deliver excellent user experiences, perform seamlessly at scale, and comply with regulatory requirements.
- Partner with peers in technology to identify opportunities for code sharing, common services, joint development, etc.
- Keep up with the latest industry research and emerging technologies to ensure we are appropriately leveraging new techniques and capabilities.
- Consistently question assumptions, challenge the status quo, and strive for improvement.
- Demonstrate accountability while leading people with passion, enthusiasm, loyalty and integrity.
- Own and lead HR processes such as performance reviews, talent development, etc.
- Bring strong interpersonal skills and the ability to partner with executive leadership to push technical solutions forward.
- Recruit top talent with technical skills, growth potential, design sensibility, and emotional intelligence.
- Lead teams in iterative product development using lean principles.
- Lead teams to provide 24x7 on-call support.

Minimum Qualifications:
- 10+ years of Software Engineering experience building and managing petabyte-scale data platforms.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- Experience managing and leading transformation of large-scale data platforms to Public Cloud; Google Cloud Platform preferred.
- Experience managing multiple workstreams to migrate Data and Compute workloads (Hive, MapReduce and Spark) from on-prem Hadoop to a Public Cloud platform.
- Deep understanding of distributed data management frameworks (e.g., Apache Spark, Apache Beam, Apache Flink).
- Good understanding of Massively Parallel Processing, Postgres, and NoSQL (HBase, Cassandra, etc.) systems.
- Demonstrated experience driving execution of a multi-year strategy with defined KPIs/OKRs on platform adoption.
- Experience running developer advocacy programs at scale across the Enterprise.
- Cloud certification preferred.
- Strong verbal and written communication skills, with an ability to explain complex problems and ideas clearly and succinctly to senior management.
- Highly motivated self-starter with the ability to juggle multiple tasks in a fast-paced, ambiguous environment, with excellent organizational skills and careful attention to detail.
- Proven track record of instilling a culture of technical excellence and engineering best practices, and of striving for execution efficiency.

We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
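As a rough illustration of the Hive-to-cloud migration work this role oversees, the sketch below copies an on-prem Hive table out as Parquet on Google Cloud Storage with PySpark. It assumes a cluster with the GCS connector configured; the table and bucket names are hypothetical, and this is one step of a migration, not the whole workflow.

```python
# Sketch: export a legacy Hive table to Parquet on GCS.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-to-gcs-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Read the legacy table through the Hive metastore (hypothetical name).
df = spark.read.table("warehouse.daily_transactions")

# Land it in cloud storage, partitioned the way downstream jobs expect.
(df.write
   .mode("overwrite")
   .partitionBy("txn_date")
   .parquet("gs://example-migration-bucket/daily_transactions/"))
```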

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Big Data Architect

Skills Required:
- 4+ years of experience as a Big Data Architect
- Proficient in Spark, Scala, Hadoop MapReduce/HDFS, Pig, Hive, and AWS cloud computing
- Hands-on experience with tools such as EMR, EC2, Pentaho BI, Impala, Elasticsearch, Apache Kafka, Node.js, Redis, Logstash, StatsD, Ganglia, Zeppelin, Hue, and Kettle
- Sound experience in Machine Learning, ZooKeeper, Bootstrap.js, Apache Flume, Fluentd, collectd, Sqoop, Presto, Tableau, R, Grok, MongoDB, Apache Storm, and HBase
- Hands-on development experience in Core Java and Advanced Java

Job Requirement:
- Bachelor's degree in Computer Science, Information Technology, or MCA
- 4+ years of experience in a relevant role
- Good analytical and problem-solving ability
- Detail-oriented, with excellent written and verbal communication skills
- The ability to work independently as well as collaborate with a team

Experience: 10 Years
Job Location: Pune/Hyderabad, India
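For context on the Hadoop MapReduce/Spark skills this posting leads with, here is the canonical map-reduce word-count pattern expressed with PySpark's RDD API, as a minimal sketch; the HDFS input path is hypothetical.

```python
# Sketch: the MapReduce word-count pattern on Spark's RDD API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
sc = spark.sparkContext

counts = (sc.textFile("hdfs:///data/example-logs/")
            .flatMap(lambda line: line.split())   # map: emit each word
            .map(lambda word: (word, 1))          # map: (word, 1) pairs
            .reduceByKey(lambda a, b: a + b))     # reduce: sum per word

for word, n in counts.take(10):
    print(word, n)
```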

Posted 2 weeks ago

Apply

4.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Description

Responsibilities:
- Design data pipeline solutions based on the requirements, incorporating optimization techniques suited to the sources involved and the data volume.
- Understand storage architectures such as Data Warehouses, Data Lakes, and Lakehouses.
- Decide the tech stack and development standards, propose tech solutions and architectural patterns, and recommend best practices for the big data solution.
- Provide thought leadership and mentoring to the data engineering team on how data should be stored and processed more efficiently and quickly at scale.
- Ensure adherence to Security and Compliance policies for the products.
- Stay up to date with evolving cloud technologies and development best practices, including open-source software.
- Work in an Agile environment, providing optimized solutions to customers and using JIRA for project management.
- Apply proven problem-solving skills, with the ability to anticipate roadblocks, diagnose problems, and generate effective solutions.
- Analyze market segments and the customer base to develop market solutions.
- Work with batch-processing and real-time systems; enhance and support solutions using PySpark/EMR, SQL and databases, AWS Athena, S3, Redshift, Lambda, AWS Glue, and other data engineering technologies.
- Demonstrate proficiency in SQL writing, SQL concepts, data modelling techniques, data validation, data quality checks, and data engineering concepts.
- Design, create, deploy, and review deliverables, obtaining final sign-off from the client by following SDLC best practices for existing and new products.
- Use technologies like Databricks, HDFS, Redshift, Hadoop, S3, Athena, RDS, and Elastic MapReduce on AWS, or similar services on GCP/Azure.
- Schedule and monitor Spark jobs using tools like Airflow and Oozie.
- Use version control tools like Git, CodeCommit, Jenkins, and CodePipeline.
- Work in a cross-functional team along with other Data Engineers, QA Engineers, and DevOps Engineers.
- Develop, test, and implement data solutions based on finalized design documents.
- Work with Unix/Linux and shell scripting.

Qualifications:
- 4-7 years of experience.
- Excellent communication and problem-solving skills.
- Highly proficient in project management principles, methods, techniques, and tools.
- Minimum 2 to 4 years of working experience in PySpark, SQL, and AWS development.
- Experience working as a mentor for junior team members.
- Hands-on experience in ETL processes and performance optimization techniques is a must.
- Should have taken part in architecture design and discussion.
- Minimum of 4 years of experience working with batch-processing/real-time systems using technologies like Databricks, HDFS, Redshift, Hadoop, Elastic MapReduce on AWS, Apache Spark, Hive/Impala, and NoSQL databases, or similar services on Azure or GCP.
- Minimum of 4 years of experience working on Data Warehouse or Data Lake projects in a role beyond just data consumption.
- Minimum of 4 years of extensive working knowledge of building scalable solutions on AWS; an equivalent level of experience in Azure or Google Cloud is also acceptable.
- Minimum of 3 years of experience in programming languages (preferably Python).
- Experience in the Pharma domain is a very big plus.
- Familiar with tools like Git, CodeCommit, Jenkins, and CodePipeline.
- Familiar with Unix/Linux and shell scripting.

Additional Skills:
- Exposure to Pharma and life sciences would be an added advantage.
- Certified in cloud technologies like AWS, GCP, or Azure.
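As a hedged sketch of the PySpark-on-AWS ETL work described above: read raw CSV from S3, apply a basic data-quality gate, and publish Parquet for Athena/Redshift Spectrum to query. The bucket names, column names, and the 1% null-rate threshold are hypothetical placeholders.

```python
# Sketch: a small PySpark ETL step with a data-quality gate.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

raw = spark.read.csv("s3://example-raw-bucket/orders/", header=True,
                     inferSchema=True)

# Data-quality check: refuse to publish if too many rows lack an order_id.
total = raw.count()
missing = raw.filter(F.col("order_id").isNull()).count()
if total == 0 or missing / total > 0.01:
    raise ValueError(f"DQ gate failed: {missing}/{total} rows missing order_id")

clean = raw.dropDuplicates(["order_id"])
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```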

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Description We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Responsibilities:
- Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
- Implementing data wrangling, scraping, and cleaning using Java or Python
- Applying strong experience with data structures
- Working extensively on API integration
- Monitoring performance and advising on any necessary infrastructure changes
- Defining data retention policies

Skills And Qualifications:
- Proficient understanding of distributed computing principles
- Proficient in Java or Python, with some machine learning experience
- Proficiency with Hadoop v2, MapReduce, HDFS, PySpark, and Spark
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with Spark
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Experience with Cloudera/MapR/Hortonworks

------------------------------------------------------

Job Family Group: Technology

------------------------------------------------------

Job Family: Applications Development

------------------------------------------------------

Time Type: Full time

------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
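To make the stream-processing requirement concrete, here is a minimal sketch of consuming a Kafka topic with Spark Structured Streaming and keeping a running count per key. It assumes the spark-sql-kafka connector is on the classpath; the broker address and topic name are hypothetical.

```python
# Sketch: per-key running counts over a Kafka topic.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("subscribe", "example-events")
               .load())

# Kafka delivers key/value as binary; cast the key and aggregate.
counts = (events.select(F.col("key").cast("string"))
                .groupBy("key")
                .count())

query = (counts.writeStream
               .outputMode("complete")
               .format("console")   # print each micro-batch for the demo
               .start())
query.awaitTermination()
```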

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Minimum qualifications:
- Bachelor's degree in Computer Science, Mathematics, a related field, or equivalent practical experience.
- 5 years of experience with data processing software (e.g., Hadoop, Spark, Pig, Hive) and algorithms (e.g., MapReduce, Flume).
- Experience writing software in Java, C++, Python, Go, or JavaScript.
- Experience with client-facing projects, troubleshooting technical issues, and working with Engineering and Sales Services teams.

Preferred qualifications:
- Experience with data center technology or supply chain.
- Experience working with data scientists.
- Experience with a wide range of data engineering and data governance tools and technologies, including cloud platforms (e.g., Google Cloud Platform), data warehousing solutions, data quality tools, and metadata management systems.
- Experience with collaborative coding and version control.
- Understanding of analytics and the model development lifecycle.
- Ability to develop and execute roadmaps for scaling data engineering capabilities.

About The Job

The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google's global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners.

As a Data Engineer, you will guide customers on how to ingest, store, process, analyze, explore, and visualize data on Google Cloud Platform. You will be responsible for data migrations and transformations, partner with clients to architect scalable data processing systems, build efficient data pipelines, and resolve platform tests. In this role, you will collaborate with Google Cloud customers and our team to successfully implement Google Cloud products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities:
- Lead the design, development, and maintenance of scalable and reliable data pipelines.
- Implement data quality checks, monitoring systems, and data lineage tracking. Advocate best practices for data engineering.
- Partner with stakeholders to translate data needs into actionable technical designs. Lead technical design reviews and collaborate with cross-functional teams.
- Implement internal Business Intelligence platform workflows. Design, develop and implement insightful dashboards and reports. Communicate complex technical concepts. Monitor dashboards to drive data-driven recommendations to audiences.
- Mentor and guide junior data engineers, providing technical guidance, code reviews, and technical development support.
- Contribute to future data engineering plans and scale the data engineering function within the organization as the first data engineer hire.

Google is proud to be an equal opportunity workplace and is an affirmative action employer.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
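As a minimal sketch of the "pipelines with data quality checks" responsibility above, assuming Apache Airflow 2.x: a three-task DAG where the load only runs if a quality gate passes. The DAG name, threshold, and check logic are hypothetical placeholders.

```python
# Sketch: a daily extract -> DQ check -> load pipeline in Airflow.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull yesterday's partition from the source system.
    print("extracting source data...")

def quality_check():
    # Placeholder gate: a real check might compare row counts or null rates.
    row_count = 1000  # would come from the warehouse in practice
    if row_count < 1:
        raise ValueError("DQ check failed: empty load")

def load():
    # Placeholder: publish the validated partition to the warehouse.
    print("loading curated data...")

with DAG(dag_id="dq_pipeline_sketch",
         start_date=datetime(2024, 1, 1),
         schedule_interval="@daily",
         catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="quality_check", python_callable=quality_check)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # fail fast: the load only runs if the DQ gate passes
```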

Posted 2 weeks ago

Apply