3 - 5 years
5 - 9 Lacs
Bengaluru
Work from Office
Job Title: Big Data Analyst. Responsibilities: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Preferred Skills: Technology->Big Data->Oracle BigData Appliance. Educational Requirements: Bachelor of Engineering. Service Line: Cloud & Infrastructure Services. * Location of posting is subject to business requirements
Posted 2 months ago
5 - 7 years
4 - 8 Lacs
Bengaluru
Work from Office
Job Title: Hadoop Admin. Responsibilities: A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs in solution design based on areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates which suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Technical and Professional Requirements: Primary skills: Technology->Big Data - Hadoop->Hadoop Administration. Preferred Skills: Technology->Big Data - Hadoop->Hadoop Administration. Additional Responsibilities: Ability to develop value-creating strategies and models that enable clients to innovate, drive growth and increase their business profitability. Good knowledge of software configuration management systems. Awareness of the latest technologies and industry trends. Logical thinking and problem-solving skills along with an ability to collaborate. Understanding of the financial processes for various types of projects and the various pricing models available. Ability to assess current processes, identify improvement areas and suggest technology solutions. Knowledge of one or two industry domains. Client interfacing skills. Project and team management. Educational Requirements: Bachelor of Engineering. Service Line: Data & Analytics Unit. * Location of posting is subject to business requirements
Posted 2 months ago
6 - 8 years
15 - 25 Lacs
Bengaluru, Bangalore Rural
Work from Office
We're Hiring: Hadoop Developer for Bangalore Location. Job Role: Hadoop Developer. Experience: 6-8 Years. Location: Bangalore. Key Responsibilities: Migrate on-premises big data Spark and Impala/Hive scripts to the Databricks environment. Transform and optimize ETL pipelines for Databricks. Understand and implement data modeling concepts and data warehouse designs. Lead the migration efforts and optimize our data infrastructure on Databricks. Solve complex data migration tasks independently. Ensure data accessibility and integrity throughout the migration process. Collaborate effectively with cross-functional teams. Communicate progress and challenges clearly to stakeholders. Work within Agile methodologies to deliver projects on time. Qualifications: 6-8 years of experience in Hadoop, Big Data, and SQL. Strong background in data migration projects. Proficiency in Spark and Impala/Hive. Experience with Databricks and cloud platforms, particularly Azure. Good understanding of data modeling concepts and data warehouse designs. Excellent problem-solving skills and a passion for data accessibility. Effective communication and collaboration skills. Experience with Agile methodologies.
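The migration work this posting describes usually amounts to re-expressing Hive/Impala script logic with the Spark DataFrame API and writing Delta tables. Below is a minimal, hedged sketch; the database, table and column names (sales_db.orders, order_date, amount) are hypothetical placeholders, not details from the posting.

```python
# Hedged sketch of a Hive/Impala-to-Databricks migration step.
# All table and column names here are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hive-to-databricks").getOrCreate()

# Logic that previously ran as a Hive/Impala script can usually be
# re-expressed against the metastore with the DataFrame API.
daily_revenue = (
    spark.table("sales_db.orders")
    .where(F.col("order_date") >= "2024-01-01")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Delta is the typical target table format on Databricks.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("sales_db.daily_revenue")
```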
Posted 2 months ago
3 - 5 years
5 - 7 Lacs
Delhi NCR, Mumbai, Bengaluru
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential. Locations: Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
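To make the "workflow orchestration with Airflow" requirement concrete, here is a hedged sketch of a DAG that submits a nightly PySpark job. The DAG id, schedule, connection id and script path are illustrative assumptions, and the `schedule=` argument assumes Airflow 2.4+ (older releases use `schedule_interval=`).

```python
# Hedged sketch: an Airflow DAG orchestrating a nightly PySpark job.
# Names and paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="nightly_orders_pipeline",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform_orders = SparkSubmitOperator(
        task_id="transform_orders",
        application="/opt/jobs/transform_orders.py",  # hypothetical script path
        conn_id="spark_default",
    )
```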
Posted 2 months ago
3 - 5 years
5 - 9 Lacs
Delhi, Ahmedabad
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential. Location: Remote - Delhi/NCR, Bangalore/Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted 2 months ago
6 - 11 years
15 - 25 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
Required Experience, Skills & Competencies: Strong hands-on experience in implementing Data Lakes with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB and Purview. Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc. Strong programming and debugging skills in Python or Scala/Java. Experience building REST services is good to have. Experience supporting BI and Data Science teams in consuming data in a secure and governed manner. Good understanding and experience of using CI/CD with Git, Jenkins, Azure DevOps. Experience setting up cloud-computing infrastructure solutions. Hands-on experience/exposure to NoSQL databases and data modelling in Hive. 9+ years of technical experience with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP). B.Tech/B.E from a reputed institute preferred. Pharma experience is a must.
Posted 2 months ago
6 - 11 years
8 - 14 Lacs
Hyderabad
Work from Office
Job Role: Strong Spark programming experience with Java. Good knowledge of SQL query writing and shell scripting. Experience working in Agile mode. Analyze, design, develop, deploy and operate high-performance, high-quality services that serve users in a cloud environment. Good understanding of the client ecosystem and expectations. In charge of code reviews, the integration process, test organization and quality of delivery. Take part in development. Experienced in writing queries using SQL commands. Experienced with deploying and operating code in a cloud environment. Experienced in working without much supervision. Your Profile: Primary Skills: Java, Spark, SQL. Secondary Skills/Good to have: Hadoop or any cloud technology, Kafka, or BO. What you'll love about working here: Choosing Capgemini means having the opportunity to make a difference, whether for the world's leading businesses or for society. It means getting the support you need to shape your career in the way that works for you. It means that when the future doesn't look as bright as you'd like, you have the opportunity to make change: to rewrite it. When you join Capgemini, you don't just start a new job. You become part of something bigger. A diverse collective of free-thinkers, entrepreneurs and experts, all working together to unleash human energy through technology, for an inclusive and sustainable future. At Capgemini, people are at the heart of everything we do! You can exponentially grow your career by being part of innovative projects and taking advantage of our extensive Learning & Development programs. With us, you will experience an inclusive, safe, healthy, and flexible work environment to bring out the best in you! You also get a chance to make positive social change and build a better world by taking an active role in our Corporate Social Responsibility and Sustainability initiatives. And whilst you make a difference, you will also have a lot of fun.
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Hyderabad
Work from Office
JR REQ --- Data Engineer (PySpark, Big Data) --- 4 to 8 years --- Hyderabad --- hemanth.karanam@tcs.com --- TCS C2H --- 900000
Posted 2 months ago
8 - 13 years
25 - 40 Lacs
Bengaluru
Hybrid
Job Title / Primary Skill: Big Data Developer (Lead/Associate Manager). Management Level: G150. Years of Experience: 8 to 13 years. Job Location: Bangalore (Hybrid). Must-Have Skills: Big Data, Spark, Scala, SQL, Hadoop ecosystem. Educational Qualification: BE/BTech/MTech/MCA, or a Bachelor's or Master's degree in Computer Science. Job Overview: Overall experience of 8+ years in IT, software engineering or a relevant discipline. Designs, develops, implements, and updates software systems in accordance with the needs of the organization. Evaluates, schedules, and resources development projects; investigates user needs; and documents, tests, and maintains computer programs. Job Description: We look for developers with good Scala programming skills and knowledge of SQL. Technical Skills: Scala, Python -> Scala is often used for Hadoop-based projects, while Python and Scala are common choices for Apache Spark-based projects. SQL -> Knowledge of SQL (Structured Query Language) is important for querying and manipulating data. Shell Script -> Shell scripts are used for batch processing of data, can be used for scheduling jobs, and are often used for deploying applications. Spark Scala -> Spark Scala allows you to write Spark applications using the Spark API in Scala. Spark SQL -> It allows you to work with structured data using SQL-like queries and DataFrame APIs; you can execute SQL queries against DataFrames, enabling easy data exploration, transformation, and analysis. The typical tasks and responsibilities of a Big Data Developer include: 1. Data Ingestion: Collecting and importing data from various sources, such as databases, logs and APIs, into the Big Data infrastructure. 2. Data Processing: Designing data pipelines to clean, transform, and prepare raw data for analysis. This often involves using technologies like Apache Hadoop and Apache Spark. 3. Data Storage: Selecting appropriate data storage technologies like Hadoop Distributed File System (HDFS), Hive, Impala, or cloud-based storage solutions (Snowflake, Databricks).
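To make the Spark SQL point above concrete: you register a DataFrame as a temporary view, then query it with SQL. The sketch below uses PySpark with invented sample rows and view name; the equivalent DataFrame/SQL API exists in Scala, which this posting emphasizes.

```python
# Small illustration of Spark SQL against a DataFrame.
# Sample data and the view name "events" are invented for the example.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

events = spark.createDataFrame(
    [("2024-01-01", "click", 3), ("2024-01-01", "view", 7), ("2024-01-02", "click", 5)],
    ["event_date", "event_type", "cnt"],
)

# Once registered, the DataFrame can be explored and transformed with
# SQL-like queries, exactly as the posting describes.
events.createOrReplaceTempView("events")
spark.sql(
    "SELECT event_date, SUM(cnt) AS total FROM events GROUP BY event_date ORDER BY event_date"
).show()
```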
Posted 2 months ago
8 - 13 years
10 - 15 Lacs
Chennai
Work from Office
Overall Responsibilities: Translate application storyboards and use cases into functional applications. Design, build, and maintain efficient, reusable, and reliable Java code. Ensure the best possible performance, quality, and responsiveness of applications. Identify bottlenecks and bugs, and devise solutions to these problems. Develop high-performance and low-latency components to run on Spark clusters. Interpret functional requirements into design approaches that can be served through the Big Data platform. Collaborate and partner with global teams based across different locations. Propose best practices and standards; hand over to operations. Perform testing of software prototypes and transfer to the operational team. Process data using Hive, Impala, and HBase. Perform analysis of large data sets and derive insights. Technical Skills (Category-wise): Java Development: Solid understanding of object-oriented programming and design patterns. Strong Java experience with Java 1.8 or a higher version. Strong core Java and multithreading working experience. Understanding of concurrency patterns and multithreading in Java. Proficient understanding of code versioning tools, such as Git. Familiarity with build tools such as Maven and continuous integration tools like Jenkins/TeamCity. Big Data Technologies: Experience in Big Data technologies like HDFS, Hive, HBase, Apache Spark, and Kafka. Experience in building self-service, platform-agnostic data access APIs. Service-oriented architecture, and data standards like JSON, Avro, Parquet. Experience in building advanced analytical models based on business context. Data Processing: Comfortable working with large data volumes and understanding logical data structures and analysis techniques. Processing data using Hive, Impala, and HBase. Strong systems analysis, design, and architecture fundamentals, unit testing, and other SDLC activities. Application performance tuning, troubleshooting experience, and implementation of these skills in the Big Data domain. Additional Skills: Experience working with Linux shell scripting. Experience with RDBMS and NoSQL databases. Basic Unix OS and scripting knowledge. Optional: Familiarity with the Arcadia tool for analytics. Optional: Familiarity with cloud and container technologies. Experience: 8+ years of relevant experience in Java and Big Data technologies. Day-to-Day Activities: Develop and maintain Java code for Big Data applications. Process and analyze large data sets using Big Data technologies. Collaborate with global teams to design and implement solutions. Perform testing and transfer software prototypes to the operational team. Troubleshoot and resolve performance issues and bugs. Ensure adherence to best practices and standards in development. Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, or equivalent experience. Soft Skills: Excellent communication and collaboration abilities. Strong interpersonal and teamwork skills. Ability to work under pressure and meet tight deadlines. Positive attitude and strong work ethic. Commitment to continuous learning and professional development. SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture promoting equality, diversity and an environment that is respectful to all.
We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Posted 3 months ago
8 - 12 years
25 - 30 Lacs
Mumbai
Work from Office
Role Description: As an Analytics Senior Analyst you will be a member of our Data Analytics Center of Excellence (CoE) for Group Audit. You will pioneer and support Group Audit in implementing innovative and effective analytics processes that are critical to the success of our audit function. Based in the Mumbai office, you will work embedded in audit teams around the world, applying the latest analytics technologies while connecting with the central team in Germany to leverage our core analytics solutions. You will be responsible for supporting all Group Audit functions with data analytics services and audit automation solutions. Team / division overview: Analytics is responsible for performing general analytics and statistical modelling in a timely manner to address current and future business needs across various areas of the business. Work includes: Defining data requirements, data collection, processing, cleaning, analysis, modelling, visualisation, development of the analytical toolkit and research techniques. Examining and identifying data patterns and trends to help answer business questions and improve decision making. Identifying areas to increase efficiency and automation of data analysis processes. Providing business functions with data insights to help them achieve their strategic goals. Where roles have a specific focus on Data Science, work will predominantly focus on: Creating data mining architectures/models/protocols, statistical reports, and data analysis methodologies to identify trends in large data sets. Researching and applying knowledge of existing and emerging data science principles, theories, and techniques to inform business decisions. Representing the bank as a data science practitioner in industry initiatives. At higher career levels, they may conduct scientific research projects with the goal of breaking new ground in data analytics. Your key responsibilities: Evaluate and provide analytics solutions to auditors to identify potential risks and anomalies, detect outliers and identify weaknesses in control activities using analytical tools like SQL and Python. Develop and maintain interactive dashboards using Tableau that effectively convey meaningful insights and track key metrics. Communicate findings and insights to stakeholders, including senior management, to support informed decision-making and effectively drive business strategy. Collaborate closely with auditors, data owners and subject matter experts to understand business requirements and translate them into analytical solutions in an agile and iterative manner. Proactively identify automation opportunities and develop solutions that simplify audit processes and make Group Audit more efficient. A core area for automation will be automated Key Control testing for Technology, Data and Innovation. Drive innovation across Group Audit, leveraging the experience gained and data collected from successful data analytics projects; the focus here will especially be IT-related audit data and testing. Promote the adoption and integration of data science into the Group Audit organization and inspire Group Audit colleagues by sharing background on successful adoption. Apply the highest quality standards, as your solutions will become an integral part of audit execution processes. Support the upskilling of auditors to gain competencies in data analytics methods to transform Group Audit into a data-driven function.
Your skills and experience: Ideally first-hand experience in an audit function, specifically in risk management and compliance, focusing on data analytics and reporting. Master's or Bachelor's degree (PhD appreciated) from an accredited college or university (or equivalent) in a quantitative field (Data Science, Mathematics, Statistics, Physics, Engineering, Computer Science, Economics, etc.) or equivalent work experience. At least 8 years of relevant experience; IT auditor experience highly appreciated. Proficiency in SQL and Python for data analysis. Proficiency in reporting and visualization using Tableau. Hands-on experience in ETL and data warehousing, Hadoop, Hive/Impala. Familiarity with GCP services and tools, OpenShift, CDSW. Familiarity with Sentiment Analysis and Natural Language Processing (NLP). Excellent verbal and written communication skills with the ability to convey complex information in a clear and concise manner to senior management, audit committees and other stakeholders. Strong problem-solving and analytical skills to interpret complex data and derive actionable insights. A creative technologist passionate about data and information, with intrinsic motivation and curiosity to learn new technologies and frameworks to adopt data analytics for new ways of auditing.
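As one hedged illustration of the "SQL and Python for outlier detection" responsibility above: a common first-pass technique is interquartile-range fencing in pandas. The transaction amounts below are fabricated for the example, and IQR fencing is only one of many methods an audit analytics team might use.

```python
# Hedged sketch: flagging candidate outliers in audit data with pandas.
# The data is fabricated; real audit work would query governed sources.
import pandas as pd

transactions = pd.DataFrame({"amount": [120, 95, 130, 110, 5000, 105, 98, 4700]})

q1, q3 = transactions["amount"].quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Rows outside the IQR fences are candidates for auditor review,
# not automatic findings.
outliers = transactions[(transactions["amount"] < lower) | (transactions["amount"] > upper)]
print(outliers)
```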
Posted 3 months ago
8 - 13 years
18 - 27 Lacs
Bengaluru
Work from Office
About Persistent: We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we've onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com. About The Position: We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management and data warehouse domains. You will be working with experts in a variety of industries, including computer science and software development, as well as department heads and senior executives, to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities.
What You'll Do: Define data retention policies. Monitor performance and advise on any necessary infrastructure changes. Mentor junior engineers and work with other architects to deliver best-in-class solutions. Implement ETL/ELT processes and orchestration of data flows. Recommend and drive adoption of newer tools and techniques from the big data ecosystem. Expertise You'll Bring: 10+ years in industry, building and managing big data systems. Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS is a must. Building stream-processing systems, using solutions such as Storm or Spark Streaming. Dealing and integrating with data storage systems like SQL and NoSQL databases, file systems, and object storage like S3. Reporting solutions like Pentaho, Power BI, Looker, including customizations. Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients. Working with SaaS-based data management products will be an added advantage. Proficiency and expertise in Cloudera/Hortonworks, Spark, HDF and NiFi. RDBMS and NoSQL like Vertica, Redshift; data modelling with physical design and SQL performance optimization. Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka. Big Data technology like Hadoop, Spark, and NoSQL-based data-warehousing solutions. Data warehousing, reporting including customization, Hadoop, Spark, Kafka, core Java, Spring/IoC, design patterns. Big Data querying tools, such as Pig, Hive, and Impala. Open-source technologies and databases (SQL & NoSQL). Proficient understanding of distributed computing principles. Ability to solve any ongoing issues with operating the cluster. Scale data pipelines using open-source components and AWS services. Cloud (AWS) provisioning, capacity planning and performance analysis at various levels. Web-based SOA architecture implementation with design pattern experience will be an added advantage. Benefits: Competitive salary and benefits package. Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents. Inclusive Environment: We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
8 - 13 years
18 - 30 Lacs
Pune
Work from Office
About Persistent: We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we've onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com. About The Position: We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management and data warehouse domains. You will be working with experts in a variety of industries, including computer science and software development, as well as department heads and senior executives, to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities.
What You'll Do: Define data retention policies. Monitor performance and advise on any necessary infrastructure changes. Mentor junior engineers and work with other architects to deliver best-in-class solutions. Implement ETL/ELT processes and orchestration of data flows. Recommend and drive adoption of newer tools and techniques from the big data ecosystem. Expertise You'll Bring: 10+ years in industry, building and managing big data systems. Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS is a must. Building stream-processing systems, using solutions such as Storm or Spark Streaming. Dealing and integrating with data storage systems like SQL and NoSQL databases, file systems, and object storage like S3. Reporting solutions like Pentaho, Power BI, Looker, including customizations. Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients. Working with SaaS-based data management products will be an added advantage. Proficiency and expertise in Cloudera/Hortonworks, Spark, HDF and NiFi. RDBMS and NoSQL like Vertica, Redshift; data modelling with physical design and SQL performance optimization. Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka. Big Data technology like Hadoop, Spark, and NoSQL-based data-warehousing solutions. Data warehousing, reporting including customization, Hadoop, Spark, Kafka, core Java, Spring/IoC, design patterns. Big Data querying tools, such as Pig, Hive, and Impala. Open-source technologies and databases (SQL & NoSQL). Proficient understanding of distributed computing principles. Ability to solve any ongoing issues with operating the cluster. Scale data pipelines using open-source components and AWS services. Cloud (AWS) provisioning, capacity planning and performance analysis at various levels. Web-based SOA architecture implementation with design pattern experience will be an added advantage. Benefits: Competitive salary and benefits package. Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents. Inclusive Environment: We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
8 - 13 years
18 - 25 Lacs
Hyderabad
Work from Office
About Persistent: We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we've onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com. About The Position: We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling and storing data. In this role you will guide the company into the future and utilize the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management and data warehouse domains. You will be working with experts in a variety of industries, including computer science and software development, as well as department heads and senior executives, to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities.
What You'll Do: Define data retention policies. Monitor performance and advise on any necessary infrastructure changes. Mentor junior engineers and work with other architects to deliver best-in-class solutions. Implement ETL/ELT processes and orchestration of data flows. Recommend and drive adoption of newer tools and techniques from the big data ecosystem. Expertise You'll Bring: 10+ years in industry, building and managing big data systems. Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS is a must. Building stream-processing systems, using solutions such as Storm or Spark Streaming. Dealing and integrating with data storage systems like SQL and NoSQL databases, file systems, and object storage like S3. Reporting solutions like Pentaho, Power BI, Looker, including customizations. Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients. Working with SaaS-based data management products will be an added advantage. Proficiency and expertise in Cloudera/Hortonworks, Spark, HDF and NiFi. RDBMS and NoSQL like Vertica, Redshift; data modelling with physical design and SQL performance optimization. Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka. Big Data technology like Hadoop, Spark, and NoSQL-based data-warehousing solutions. Data warehousing, reporting including customization, Hadoop, Spark, Kafka, core Java, Spring/IoC, design patterns. Big Data querying tools, such as Pig, Hive, and Impala. Open-source technologies and databases (SQL & NoSQL). Proficient understanding of distributed computing principles. Ability to solve any ongoing issues with operating the cluster. Scale data pipelines using open-source components and AWS services. Cloud (AWS) provisioning, capacity planning and performance analysis at various levels. Web-based SOA architecture implementation with design pattern experience will be an added advantage. Benefits: Competitive salary and benefits package. Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents. Inclusive Environment: We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
7 - 11 years
9 - 13 Lacs
Mumbai
Work from Office
Skill required: Talent & HR - SAP Talent & HR. Designation: PPSM Specialist. Qualifications: Any Graduation. Years of Experience: 7 to 11 years. Language Ability: English (International) - Proficient. What would you do? Improve workforce performance and productivity, boost business agility, and increase revenue while reducing costs across the Talent & HR process. In this role, you will be expected to leverage the part of the enterprise resource planning (ERP) system that handles employee records and provides a framework to automate HR services like payroll, benefits, personnel activity and compliance. What are we looking for? In-depth knowledge of PMO activities, excellent organizational skills, and strong attention to detail. This role involves collaboration with multiple stakeholders, supporting project-specific processes, and managing critical tasks like onboarding, reporting, and session coordination. Microsoft Project Plan/ADO maintenance, reporting, and ad-hoc contractor management. Proficient in tools such as ADO, Microsoft Project, Google Suite, Beeline, and MS Office (Excel, PowerPoint). Roles and Responsibilities: In this role you are required to analyze and solve moderately complex problems. You may create new solutions, leveraging and, where needed, adapting existing methods and procedures. The role requires an understanding of the strategic direction set by senior management as it relates to team goals. Primary upward interaction is with your direct supervisor; you may interact with peers and/or management levels at a client and/or within Accenture. Guidance would be provided when determining methods and procedures on new assignments. Decisions you make will often impact the team in which you reside. You may manage small teams and/or work efforts (if in an individual contributor role) at a client or within Accenture. Please note that this role may require you to work in rotational shifts. Qualifications: Any Graduation.
Posted 3 months ago
5 - 10 years
15 - 25 Lacs
Chennai, Bengaluru, Bangalore Rural
Work from Office
5+ years of data engineering experience. Strong skills in Azure, Python, or Scala. Expertise in Apache Spark, Databricks, and SQL. Build scalable data pipelines and optimize workflows. Migrate Spark/Hive scripts to Databricks.
Posted 3 months ago
6 - 10 years
10 - 14 Lacs
Hyderabad
Work from Office
As a Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Strong technical abilities to understand, design, write and debug complex code. Big Data, PySpark, Scala, Hadoop, Hive, Java, Python. Develops applications on Big Data technologies including API development. Knowledge of relational databases; experience in troubleshooting, monitoring and performance tuning of Spark jobs. Presto, Impala, HDFS, Linux. Good to have: knowledge of analytics libraries, open-source Natural Language Processing, and statistical and big data computing libraries. Hands-on experience with cloud technology (AWS/GCP). Preferred technical and professional experience: You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.
Posted 3 months ago
4 - 8 years
5 - 15 Lacs
Mumbai
Work from Office
Technology Analyst - Development, Analysis, Modelling, Support. Mandatory skills: HDFS, Ozone, Hive, Impala, Spark, Atlas, Ranger; Kafka, Flink, Spark Streaming; Java, Python/PySpark. Experience with CI/CD (GitLab/GitHub, Jenkins, Ansible, Nexus) for automated build & test. Excellent communication. Bachelor's degree in Computer Science, Software Engineering, or a related field (or equivalent).
Posted 3 months ago
4 - 8 years
5 - 15 Lacs
Bengaluru
Work from Office
Technology Analyst - Development, Analysis, Modelling, Support. Mandatory skills: HDFS, Ozone, Hive, Impala, Spark, Atlas, Ranger; Kafka, Flink, Spark Streaming; Java, Python/PySpark. Experience with CI/CD (GitLab/GitHub, Jenkins, Ansible, Nexus) for automated build & test. Excellent communication. Bachelor's degree in Computer Science, Software Engineering, or a related field (or equivalent).
Posted 3 months ago
7 - 11 years
15 - 20 Lacs
Bengaluru
Work from Office
Role Description: Overview: Our Technology, Data, and Innovation (TDI) strategy is focused on strengthening engineering expertise, introducing an agile delivery model, as well as modernizing the Bank's Information Technology (IT) infrastructure with long-term investments and taking advantage of cloud computing. We continue to invest in and build a team of visionary tech talent who will ensure we thrive in this period of unprecedented change for the industry, so we are seeking a Lead Engineer to work in the Transaction Monitoring and Data Controls team. You will be a hands-on technical engineer within our delivery pods and deliver software solutions. As lead engineer you will design software architecture and implement complex solutions, driving re-use and best practices. You will contribute to strategic design decisions and define engineering approaches that can be disruptive, with the goals of simplifying architectures and reducing technical debt. Your key responsibilities: Leverage best practices and build data-driven decisions. Define and build applications for re-platform or re-architect strategies, and implement blueprints and patterns for common application architectures. Collaborate across the TDI areas such as Cloud Platform, Security, Data, and Risk & Compliance to create optimum solutions for the business, increasing re-use, creating best practice, and sharing knowledge. Drive optimizations in the software development life cycle (SDLC) process to provide productivity improvements, including tools and techniques. Enable the adoption of practices such as Site Reliability Engineering (SRE) and Dev/SecOps to minimize toil and manual tasks and increase automation and stability. Your skills and experience - Skills You'll Need: Full experience of all Agile software development frameworks and processes. Technical architecture and software design expertise, focused on building working examples and reference implementations in code. Deep professional expertise in Python/PySpark, Docker, Kubernetes, and automated testing for data-driven projects. Sound knowledge of Big Data technologies (Hive, Impala, Spark, BigQuery) with the ability to write high-performing and efficient Structured Query Language (SQL) and optimize/simplify existing queries. You have experience implementing applications on cloud platforms (Azure, AWS or Google Cloud Platform (GCP)) and using their major components (Software Defined Networks, Identity and Access Management (IAM), compute, storage, etc.) in order to define cloud-native application architectures such as microservices, service mesh or data streaming applications. Skills That Will Help You Excel: Be a team player. You would adopt an automation-first approach to testing, deployment, security, and compliance of solutions through Infrastructure as Code and automated policy enforcement. You enjoy supporting our community of engineers and create opportunities for progression, promoting continuous learning and skills development.
Posted 3 months ago
4 - 9 years
5 - 15 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
About Client: Hiring for One of Our Multinational Corporations! Job Title: Data Engineer (Scala, Spark, Hadoop) Developer. Location: Bangalore. Job Type: Full Time, WORK FROM OFFICE. Job Summary: We are seeking a talented and motivated Data Engineer with strong expertise in Scala, Apache Spark, and Hadoop to join our growing team. As a Data Engineer, you will be responsible for building, optimizing, and maintaining scalable data pipelines, data processing systems, and data storage solutions. The ideal candidate will be passionate about working with big data technologies and developing innovative solutions for processing and analyzing large datasets. Key Responsibilities: Design, develop, and implement robust data processing pipelines using Scala, Apache Spark, and Hadoop frameworks. Develop ETL processes to extract, transform, and load large volumes of structured and unstructured data into data lakes and data warehouses. Work with large datasets to optimize performance, scalability, and data quality. Collaborate with cross-functional teams, including Data Scientists, Analysts, and DevOps, to deliver end-to-end data solutions. Ensure data processing workflows are automated, monitored, and optimized for efficiency and cost-effectiveness. Troubleshoot and resolve data issues, ensuring data integrity and quality. Work on the integration of various data sources into the Hadoop ecosystem and ensure effective data management. Develop and implement best practices for coding, testing, and deployment of data processing pipelines. Document and maintain clear and comprehensive technical documentation for data engineering processes and systems. Stay up-to-date with the latest industry trends, tools, and technologies in the data engineering and big data ecosystem. Required Skills and Qualifications: Proven experience as a Data Engineer, Data Developer, or similar role working with Scala, Apache Spark, and Hadoop. Strong knowledge of big data processing frameworks such as Apache Spark, Hadoop, HDFS, and MapReduce. Experience with distributed computing and parallel processing techniques. Solid experience with ETL processes and working with relational and NoSQL databases (e.g., MySQL, MongoDB, Cassandra, etc.). Proficiency in SQL for querying large datasets. Strong experience with data storage technologies such as HDFS, Hive, HBase, or Parquet. Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data-related services (e.g., S3, Redshift, BigQuery). Experience with workflow orchestration tools such as Airflow, Oozie, or Luigi. Knowledge of data warehousing, data lakes, and data integration patterns. Familiarity with version control tools such as Git and CI/CD pipelines. Strong problem-solving skills and the ability to debug complex issues. Excellent communication skills and the ability to collaborate with different teams. Preferred Skills: Experience with streaming data technologies like Kafka, Flink, or Kinesis. Familiarity with data visualization tools (e.g., Tableau, Power BI) and reporting. Knowledge of machine learning models and working with Data Science teams. Experience working in an Agile/Scrum environment. Degree in Computer Science, Engineering, Mathematics, or a related field. Why Join Us? Be a part of an innovative and dynamic team working on cutting-edge data engineering technologies. Opportunities for growth and career advancement in the data engineering domain. Competitive salary and benefits package.
Flexible work arrangements and a supportive work environment. Contact: Srishty Srivastava, Black and White Business Solutions Pvt. Ltd., Bangalore, Karnataka, India. Direct Number: 8067432456. srishty.srivastava@blackwhite.in | www.blackwhite.in
Posted 3 months ago
6 - 11 years
8 - 12 Lacs
Hyderabad
Work from Office
About The Role: Candidates should have a minimum of 3 years of hands-on experience in the data engineering stream. Should have good knowledge of and experience with Hadoop, Spark, Impala and performance tuning. Should have good knowledge of and experience with the Java programming language. Should have good knowledge of and experience with SQL and Oracle. Should hold a certification in Java development or Spark.
Posted 3 months ago
10 - 15 years
12 - 17 Lacs
Hyderabad
Work from Office
Minimum 10 years' experience in design, architecture or development in Analytics and Data Warehousing. Experience in solution design, solution governance and implementing end-to-end Big Data solutions using Hadoop ecosystems (Hive, HDFS, Pig, HBase, Flume, Kafka, Sqoop, YARN, Impala). Ability to produce semantic, conceptual, logical and physical data models using data modelling techniques such as Data Vault, Dimensional Modelling, 3NF, etc. Ability to design data warehousing and enterprise analytics solutions using Teradata or relevant data platforms. Can demonstrate expertise in design patterns (FSLDM, IBM IFW DW) and data modelling frameworks including dimensional, star and non-dimensional schemas. Commendable experience in consistently driving cost-effective and technologically feasible solutions, while steering solution decisions across the group to meet both operational and strategic goals, is essential. The ability to positively influence the adoption of new products, solutions and processes, in alignment with the existing information architectural design, would be desirable. Appreciation of Analytics & Data/BI architecture and broad experience across all technology disciplines, including project management, IT strategy development, and business process, information and application architecture. Extensive experience with Teradata data warehouses and Big Data platforms, both on-premises and in the cloud. Extensive experience in large enterprise environments handling large volumes of data with high Service Level Agreements across various business functions/units. Experience leading discussions and presentations, and driving decisions across groups of stakeholders.
Posted 3 months ago
8 - 13 years
6 - 10 Lacs
Chennai
Work from Office
Must have strong business analysis and data analysis skills. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, etc. Working experience in the banking domain. Expertise in Unix or Linux. Good knowledge of SQL: Oracle Database, SQL Server, Teradata, etc. Hands-on with Big Data technologies: Hadoop, Hive, HQL, PySpark, Spark, Kafka, Impala and related tools. Good handle on working with distributed computing and cloud services platforms: AWS, Azure, etc. Strong in at least one programming language (Python, Scala, Java, etc.) and programming basics such as data structures. Excellent communication and collaboration skills, with the ability to work effectively with technical and non-technical stakeholders. Experience with IT service management tools, such as JIRA, ServiceNow, or similar. Strong understanding of data security, data governance, and compliance regulations.
Posted 3 months ago
7 - 12 years
5 - 9 Lacs
Bengaluru
Work from Office
Business Case: Caspian is the Big Data cluster for NFRT, managed and hosted by the Central Data team. It is a critical Tier 1 platform for multiple business functions and processes operating across NFRT. Given the technology strategy and principles, data-driven design and products are a key pillar, and this position is extremely critical to strengthening the current system and continuing to build and develop it as per the future objectives and strategy of NFRT. As a Big Data Platform Engineer you will be responsible for the technical delivery of our Data Platform's core functionality and strategic solutions. This includes the development of reusable tooling/APIs, applications, data stores, and the software stack to accelerate our relational data warehousing, big data analytics and data management needs. This individual will also be responsible for designing and developing strategic solutions that utilize big data, cloud and other modern technologies in order to meet our constantly changing business requirements. Day-to-day management of several small development teams focused on our Big Data platform and data management applications, plus collaboration and coordination with multiple stakeholders like the Hadoop Data Engineering team, Application team and Unix Ops team to ensure the stability of our Big Data platform. Skills Required: Strong technical experience in Scala, Java, Python and Spark for designing, creating and maintaining big data applications. Experience maintaining Cloudera Hadoop infrastructure such as HDFS, YARN, Spark, Impala and edge nodes. Experience developing cloud-based Big Data solutions on AWS or Azure. Strong SQL skills with commensurate experience on a large database platform. Experience with the complete SDLC process and Agile methodology. Strong oral and written communication. Experience with cloud data platforms like Snowflake or Databricks is an added advantage.
Posted 3 months ago