Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
6 - 10 years
13 - 17 Lacs
Hyderabad
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues: we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we have onboarded over 4,900 new employees in the past year, bringing our total employee count to more than 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com.

About The Position
We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.
What You'll Do
- Manage the customer's priorities of projects and requests
- Assess customer needs using a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks, and cost
- Design and implement software products (Big Data related), including data models and visualizations
- Participate actively in the teams you work with
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches, and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Demonstrate a solid level of understanding across a range of technical skills, attitudes, and behaviors
- Stay focused on driving value back into the business

Expertise You'll Bring
- 6+ years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Experience working with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks such as HBase, Talend, and NoSQL databases
- Apache Spark or other streaming Big Data processing experience preferred; Java or additional Big Data technologies a plus

Benefits
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above
Posted 3 months ago
4 - 8 years
8 - 12 Lacs
Hyderabad
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues: we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we have onboarded over 4,900 new employees in the past year, bringing our total employee count to more than 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com.

About The Position
We are looking for a Full Stack Developer who is a skilled professional with exposure to cloud platforms, DevOps, and data visualization, preferably with networking domain (Cisco) product exposure. You will collaborate with internal teams to design, develop, deploy, and maintain software applications at scale. To succeed as a Full Stack Developer, you will be required to ensure the timely completion and approval of project deliverables. You will also be expected to recommend new technologies and techniques for application development.
What You'll Do
- Design, develop, deploy, and maintain software applications at scale using Java/J2EE, JavaScript frameworks (Angular or React), and associated technologies
- Deploy software using CI/CD tools such as Jenkins
- Understand the technologies implemented, and interface with the project manager on status and technical issues
- Solve and articulate simple and complex problems with application design, development, and user experiences
- Collaborate with other developers and designers, and assist with technical matters when required

Expertise You'll Bring
- Proven ability in Java/J2EE, Spring, Python, Docker, Kubernetes, and Microservices
- Knowledge of the following distributed technologies will give you an added advantage: Apache Spark, MapReduce principles, Kafka (MSK), Apache Hadoop (AWS EMR)

Benefits
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above
Posted 3 months ago
5 - 10 years
15 - 25 Lacs
Bengaluru
Hybrid
Required skills:
- Relevant experience with Scala/Spark Big Data development
- Strong database experience, preferably with Hadoop, DB2, or Sybase
- Good understanding of the Hadoop (HDFS) ecosystem
- Complete SDLC process and Agile methodology (Scrum)
- Strong oral and written communication skills
- Experience working within a Scrum team
- Excellent interpersonal skills and professional approach
- The ability to investigate and solve technical problems in the context of supporting production applications
- Hands-on data mining and analytical work experience with big data, or Scala on Spark
- Unix OS, scripting, Python
- Good understanding of DevOps concepts, including working experience with CI/CD tools like Jenkins
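For candidates weighing the Scala/Spark requirement above: the core pattern of a Spark batch job is a map stage followed by a key-based aggregation. A minimal pure-Python sketch of that pattern, for illustration only (the function names are hypothetical, and a real job would use Spark's RDD or DataFrame API in Scala):

```python
from collections import defaultdict

def map_phase(records):
    # Emit (key, 1) pairs, as a Spark map()/flatMap() stage would
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def reduce_by_key(pairs):
    # Aggregate values per key, as Spark's reduceByKey() does
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

logs = ["ERROR disk full", "INFO started", "ERROR disk full"]
counts = reduce_by_key(map_phase(logs))
```

In Spark the same shape distributes across a cluster; the sketch only shows the transformation logic an interviewer for such a role may ask about.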
Posted 3 months ago
2 - 5 years
14 - 17 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it
- Learn new technologies and apply them in feature development within the time frame provided
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark
- Strong problem-solving skills

Preferred technical and professional experience:
- Hands-on experience with cloud technology (AWS/GCP/Azure)
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: Engineering graduate, preferably Computer Science, with 15 years of full-time education

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using PySpark. Your typical day will involve collaborating with cross-functional teams, developing and deploying PySpark applications, and acting as the primary point of contact for the project.

Roles & Responsibilities:
- Lead the effort to design, build, and configure PySpark applications, collaborating with cross-functional teams to ensure project success.
- Develop and deploy PySpark applications, ensuring adherence to best practices and standards.
- Act as the primary point of contact for the project, communicating effectively with stakeholders and providing regular updates on project progress.
- Provide technical guidance and mentorship to junior team members, ensuring their continued growth and development.
- Stay updated with the latest advancements in PySpark and related technologies, integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must have: strong experience in PySpark.
- Good to have: experience with Hadoop, Hive, and other Big Data technologies.
- Solid understanding of software development principles and best practices.
- Experience with Agile development methodologies.
- Strong problem-solving and analytical skills.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices. Mandatory return to office (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).

Qualifications: Engineering graduate, preferably Computer Science, with 15 years of full-time education
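For context on the PySpark skill this listing leads with: PySpark code is written as chains of transformations over a DataFrame. A tiny pure-Python stand-in for that chained style, purely illustrative (the `Pipeline` class is hypothetical; real code would use `pyspark.sql.DataFrame` methods such as `filter` and `select`):

```python
class Pipeline:
    """Toy stand-in for PySpark's chained DataFrame transformations."""
    def __init__(self, rows):
        self.rows = list(rows)

    def filter(self, predicate):
        # Keep rows matching the predicate, like DataFrame.filter()
        return Pipeline(r for r in self.rows if predicate(r))

    def select(self, *cols):
        # Project to the named columns, like DataFrame.select()
        return Pipeline({c: r[c] for c in cols} for r in self.rows)

    def collect(self):
        return self.rows

data = [{"name": "a", "score": 4}, {"name": "b", "score": 9}]
result = Pipeline(data).filter(lambda r: r["score"] > 5).select("name").collect()
```

The chained, lazy-looking style is the idiom the role's "design and deploy PySpark applications" responsibility refers to.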
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication within the team and with stakeholders.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team's performance.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the application development process effectively.
- Ensure timely delivery of projects.
- Mentor and guide team members in their professional growth.

Professional & Technical Skills:
- Must have: proficiency in PySpark.
- Strong understanding of big data processing.
- Experience in designing and implementing scalable applications.
- Knowledge of cloud platforms like AWS or Azure.
- Hands-on experience in data processing and analysis.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualifications: 15 years of full-time education
Posted 3 months ago
10 - 14 years
12 - 16 Lacs
Pune
Work from Office
Client expectations beyond the JD: longer AWS data engineering experience (Glue, Spark, ECR/ECS, Docker), Python, PySpark, Hudi/Iceberg/Terraform, and Kafka. Java experience early in the career would be a great addition but is not a priority (for the OOP aspects and Java connectors).
Posted 3 months ago
6 - 8 years
8 - 12 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it
- Learn new technologies and apply them in feature development within the time frame provided
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark
- Strong problem-solving skills

Preferred technical and professional experience:
- Hands-on experience with cloud technology (AWS/GCP/Azure)
Posted 3 months ago
5 - 9 years
10 - 14 Lacs
Mumbai
Work from Office
REQUIREMENTS
- Excellent analytical and problem-solving skills, the ability to understand complex problems, and the ability to generate appropriate technical solutions.
- Knowledge of AI and ML technology frameworks and solutions is preferred.
- Knowledge of different data security standards (e.g. ISO27K) and regulatory requirements (e.g. PCI-DSS, GDPR, etc.).
- Excellent skills in NoSQL and Oracle, preferably in a UNIX/Linux environment.
- Knowledge of database modeling and optimization techniques. Must be able to map business requirements to database models/ERDs.
- At least 3 years of experience in handling database server security implementation, high-availability solutions, backup & recovery, performance tuning & monitoring, capacity planning, mirroring, and clustering.
- Maintain database development guidelines and standards.
- Knowledge of Reporting/Business Intelligence databases and tools.
- Data visualization, data migration, and data modeling.
- DBMS software, including SQL Server.
- Database and cloud computing design, architectures, and data lakes.
- Working knowledge of Hadoop technologies like Pig, Hive, and MapReduce.
- Applied mathematics and statistics.
- Expert knowledge of Oracle Database and working knowledge of Microsoft SQL Server, MySQL, PostgreSQL, and IBM Db2.

RESPONSIBILITIES
- Collaborate with systems architects, security architects, business owners, and business analysts to understand business requirements.
- Develop and document database architectural strategies at the modeling, design, and implementation stages to address business requirements.
- Design and document databases to support business applications, ensuring system scalability, security, efficiency, compliance, performance, and reliability.
- Design innovative data services solutions using SQL & NoSQL, perform text analytics, and carry out real-time analysis of big data while complying with all applicable data security and privacy requirements, such as GDPR.
- Lead engagements with OEMs of SQL & NoSQL providers and provide expert knowledge and troubleshooting skills for incident resolution.

QUALIFICATIONS
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in database administration or related areas.

OTHER ATTRIBUTES
- Strong problem-solving skills and attention to detail.
- Excellent communication and interpersonal skills.
- Ability to work independently and in a team environment.
- Willingness to learn and adapt to new technologies.

LOCATION
Mumbai
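The role above centers on mapping business requirements to database models and writing reporting SQL. A minimal sketch using Python's built-in SQLite module; the schema and table names are hypothetical, chosen only to illustrate a relational model plus an aggregate query:

```python
import sqlite3

# In-memory database with a hypothetical customers/orders model
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        amount REAL NOT NULL
    );
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Asha"), (2, "Ravi")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 250.0), (2, 1, 100.0), (3, 2, 75.0)])

# Aggregate spend per customer: the kind of reporting query the role describes
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY total DESC
""").fetchall()
```

The same modeling step (entities, keys, and a reporting query on top) applies whether the target engine is Oracle, SQL Server, PostgreSQL, or Db2.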
Posted 3 months ago
6 - 10 years
10 - 16 Lacs
Hyderabad
Work from Office
Overview & Responsibilities
Data Modeling: Develop and implement predictive and prescriptive models to support business decision-making. Ensure data quality and integrity throughout the analysis process.
- Design and implement the data infrastructure and processing workflows required to support data science, machine learning, BI, and reporting
- Build robust, efficient, and reliable data pipelines drawing on diverse data sources (CRM, Nielsen, sDNA, customer & campaign data)
- Design and develop real-time streaming and batch processing pipeline solutions
- Own the data expertise and data quality for the pipelines
- Clean and prepare data for use in advanced analytics, artificial intelligence, and machine learning projects
- Build analyses in Python with accompanying documentation
- Combine and manipulate data sources to produce value-added analyses from which insights and opportunities can be extracted
- Communicate effectively with the business to understand business needs, anticipate requirements more accurately, and suggest the best solution approach
- Lead design, code, and process review sessions to ensure compliance with established standards, policies, and performance guidelines
- Design BI solutions based on a guided analytics approach and best-in-class storytelling delivery

Qualifications
- Bachelor's or advanced degree in a quantitative field (e.g., Computer Science, Mathematics, Statistics, Data Science) or in business management, or equivalent experience
- 6+ years of Power BI and Python experience
- Strong experience in Power BI (backend/frontend) to develop compelling reports and dashboards
- Experience in Databricks and Python/PySpark
- Able to perform ETL transformations of large-scale datasets and combine different data sources
- Experience with multiple data technologies and concepts such as SQL, NoSQL, Airflow, Kafka, Hadoop, Hive, Spark, MapReduce, and columnar databases
- 2+ years of experience with schema design and dimensional data modeling
- Experience writing production code for Python or JVM-based systems, with familiarity with a few other languages and a preference for the right tool for the job
- Fluent English communication skills
- Experience optimizing larger applications to increase speed, scalability, and extensibility
- Structured thinker / problem solver
- Comfort with ambiguity, as demonstrated by the ability to adapt in less-structured environments
- Strong communication and interpersonal skills, with the ability to effectively convey complex concepts to both technical and non-technical stakeholders
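The pipeline responsibilities above (combining sources, ETL transformation, data quality ownership) reduce to a join-then-aggregate step. A plain-Python illustration with hypothetical field names, standing in for what would normally run as a PySpark or Databricks job:

```python
# Two hypothetical source extracts: CRM records and campaign spend
crm = [
    {"customer_id": 1, "region": "South"},
    {"customer_id": 2, "region": "North"},
]
campaigns = [
    {"customer_id": 1, "spend": 120.0},
    {"customer_id": 1, "spend": 80.0},
    {"customer_id": 2, "spend": 50.0},
]

def transform(crm_rows, campaign_rows):
    # Join spend onto CRM records, then aggregate per region (the "T" in ETL)
    region_by_customer = {r["customer_id"]: r["region"] for r in crm_rows}
    totals = {}
    for row in campaign_rows:
        region = region_by_customer[row["customer_id"]]
        totals[region] = totals.get(region, 0.0) + row["spend"]
    return totals

spend_by_region = transform(crm, campaigns)
```

In a real pipeline the same join and aggregation would be expressed over DataFrames, with data-quality checks (nulls, unmatched keys) before the join.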
Posted 3 months ago
2 - 5 years
4 - 7 Lacs
Pune
Work from Office
Job Purpose
Capable of effectively handling development and support in Scala/Python/Databricks technology.

Duties and Responsibilities
- Understand business logic from the PMO team and convert it into a technical specification.
- Build data integration modules between different systems.
- Build data expertise and own data quality.
- Prepare processes and best practices for day-to-day activities and have the team implement them.

Key Decisions / Dimensions
- Fast decision-making on production fixes is expected.

Major Challenges
- Define and manage SLAs for all support activities
- Define and meet milestones for all project deliveries

Required Qualifications and Experience
a) Qualifications: Minimum qualification required is graduation
b) Work Experience: Relevant work experience of 2 to 5 years
c) Skills Keywords:
- Hands-on experience with Scala/Python
- Knowledge of joins, classes, functions, and ETL in Scala/Python
- Knowledge of code optimization techniques in Databricks
- Experience with other Azure platforms will be an added advantage
Posted 3 months ago
8 - 10 years
13 - 17 Lacs
Chennai
Work from Office
Design and implement an efficient data handling system based on Java and Spark. Perform and oversee tasks such as programming, writing scripts, calling APIs, and writing complex SQL queries. Implement data stores that support the scalable processing and storage of our high-frequency data.

Skill Sets
Must have:
- Minimum 8 to 10 years of experience in Spring Boot, advanced Java, and Spark.
- Bachelor's degree or higher in computer science, data science, or a related field.
- Hands-on experience with data cleaning, visualization, and reporting.
- Experience working in an agile environment.
- Experience with platforms such as MapReduce, Spark, and Hive.
- Excellent analytical and problem-solving skills.

Good to have:
- Working experience in VueJS.
- Familiarity with the Hadoop ecosystem.
- Experience with AWS is a plus.
- Experience with Python and Scala is a plus.
- At least 2 years of relevant experience with real-time data stream platforms such as Kafka and Spark Streaming.
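For the real-time stream processing skills named above (Kafka, Spark Streaming), the basic primitive is counting events within a sliding time window. A self-contained toy sketch, not Kafka or Spark itself; the class and its parameters are hypothetical:

```python
from collections import deque

class WindowedCounter:
    """Count events in a sliding time window: a toy stand-in for the
    windowed aggregations Kafka + Spark Streaming pipelines perform."""
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # (timestamp, value) pairs, oldest first

    def add(self, timestamp, value):
        self.events.append((timestamp, value))
        # Evict events that have fallen out of the window
        while self.events and self.events[0][0] <= timestamp - self.window:
            self.events.popleft()

    def count(self):
        return len(self.events)

counter = WindowedCounter(window_seconds=10)
for ts in [1, 3, 5, 12]:          # hypothetical event timestamps
    counter.add(ts, "click")
# At t=12, the event at t=1 has expired, since 1 <= 12 - 10
```

Production systems add watermarking for late events and distribute the state, but the eviction-on-arrival logic is the same idea.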
Posted 3 months ago
3 - 6 years
3 - 6 Lacs
Hyderabad
Work from Office
Hadoop Admin (1 Position)
- Hadoop administration
- Automation (Ansible, shell scripting, or Python scripting)
- DevOps skills (should be able to code in at least one language, preferably Python)
Location: Preferably Bangalore; otherwise Chennai, Pune, or Hyderabad
Working Type: Remote
Posted 3 months ago
2 - 5 years
4 - 7 Lacs
Bengaluru
Work from Office
Job Title: Spark Developer - Immediate Joiner

Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Preferred Skills: Technology -> Big Data - Data Processing -> Spark -> Spark Streaming
Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCA
Service Line: Data & Analytics Unit
* Location of posting is subject to business requirements
Posted 3 months ago
4 - 8 years
6 - 10 Lacs
Hyderabad
Work from Office
JR REQ: Big Data Engineer, 4 to 8 years, HYD, Karuppiah Mg, TCS C2H, 900000
Posted 3 months ago
6 - 11 years
0 - 3 Lacs
Bengaluru
Work from Office
SUMMARY
This is a remote position.

Job Description: EMR Admin
We are seeking an experienced EMR Admin with expertise in Big Data services such as Hive, Metastore, HBase, and Hue. The ideal candidate should also possess knowledge of Terraform and Jenkins. Familiarity with Kerberos and Ansible tools would be an added advantage, although not mandatory. Additionally, candidates with Hadoop admin skills, proficiency in Terraform and Jenkins, and the ability to handle EMR Admin responsibilities are encouraged to apply.

Location: Remote
Experience: 6+ years
Must-Have: The candidate should have 4 years in EMR administration.

Requirements
- Proven experience in EMR administration
- Proficiency in Big Data services including Hive, Metastore, HBase, and Hue
- Knowledge of Terraform and Jenkins
- Familiarity with Kerberos and Ansible tools (preferred)
- Experience in Hadoop administration (preferred)
Posted 3 months ago
3 - 8 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Apache Hadoop
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Apache Hadoop. Your typical day will involve working with the Hadoop ecosystem, developing and testing applications, and troubleshooting issues.

Roles & Responsibilities:
- Design, develop, and test applications using Apache Hadoop and related technologies.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Troubleshoot and debug issues in the Hadoop ecosystem, including HDFS, MapReduce, Hive, and Pig.
- Ensure the performance, scalability, and reliability of applications by optimizing code and configurations.

Professional & Technical Skills:
- Must have: experience with Apache Hadoop.
- Strong understanding of the Hadoop ecosystem, including HDFS, MapReduce, Hive, and Pig.
- Experience with the Java or Scala programming languages.
- Familiarity with SQL and NoSQL databases.
- Experience with data ingestion, processing, and analysis using Hadoop tools like Sqoop, Flume, and Spark.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Hadoop.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Pune office.

Qualification: 15 years of full-time education
Posted 3 months ago
2 - 4 years
5 - 9 Lacs
Kolkata
Work from Office
Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must-have skills: Google BigQuery
Good-to-have skills: PySpark, Big Data Analytics Architecture and Design, Scala Programming Language, ETL processes, and data warehousing
Minimum experience: 2 year(s)
Educational Qualification: B.Tech
Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements. Your typical day will involve working with Google BigQuery and applying your skills in Scala, PySpark, ETL processes, and data warehousing to design and develop data-driven solutions.
Roles & Responsibilities:
- Design and develop applications using Google BigQuery to meet business process and application requirements.
- Collaborate with cross-functional teams to define requirements and ensure applications meet business needs.
- Apply your skills in Scala, PySpark, ETL processes, and data warehousing to design and develop data-driven solutions.
- Ensure applications are designed to be scalable, reliable, and maintainable.
- Stay current with advancements in Big Data technologies and integrate innovative approaches for sustained competitive advantage.
Professional & Technical Skills:
- Must-have: Proficiency in Google BigQuery.
- Good-to-have: Scala Programming Language, Big Data Analytics Architecture and Design, PySpark, ETL processes, and data warehousing.
- Solid understanding of data engineering principles and best practices.
- Experience designing and developing data-driven solutions.
- Experience working with large datasets and designing scalable solutions.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
Additional Information:
- The candidate should have a minimum of 2 years of experience with Google BigQuery.
- The ideal candidate will have a strong educational background in computer science or a related field and a proven track record of delivering impactful data-driven solutions.
- This position is based at our Hyderabad office.
Qualification: B.Tech
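As a taste of the ETL-into-BigQuery work this role describes, the snippet below sketches building a BigQuery Standard SQL MERGE statement for an incremental load. This is only an illustrative sketch: the table names (`proj.ds.orders`), column names, and the helper function itself are hypothetical, not part of the posting.

```python
def build_merge_sql(target, staging, key, columns):
    """Build a BigQuery Standard SQL MERGE for an incremental (upsert) load.

    `target`/`staging` are fully-qualified table names, `key` is the join
    column, `columns` are the non-key columns to upsert. All names used
    here are illustrative placeholders.
    """
    set_clause = ", ".join(f"T.{c} = S.{c}" for c in columns)
    cols = ", ".join([key] + columns)
    vals = ", ".join(f"S.{c}" for c in [key] + columns)
    return (
        f"MERGE `{target}` T USING `{staging}` S ON T.{key} = S.{key} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({vals})"
    )

# Hypothetical tables: upsert staged orders into the warehouse table.
sql = build_merge_sql("proj.ds.orders", "proj.ds.orders_stg",
                      "order_id", ["status", "amount"])
```

In practice the generated statement would be submitted through the BigQuery client or scheduled as a query, with the staging table refreshed by each pipeline run.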
Posted 3 months ago
5 - 9 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Scala Programming Language
Good-to-have skills: NA
Minimum experience: 5 year(s)
Educational Qualification: NA
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications using the Scala programming language. Your typical day will involve collaborating with cross-functional teams, managing project timelines, and ensuring the successful delivery of high-quality software solutions.
Roles & Responsibilities:
- Lead the design, development, and deployment of software applications using Scala.
- Collaborate with cross-functional teams to identify and prioritize project requirements, ensuring timely delivery of high-quality software solutions.
- Manage project timelines and resources, ensuring successful delivery within budget and scope.
- Provide technical leadership and mentorship to junior team members, promoting a culture of continuous learning and improvement.
- Stay up to date with emerging trends and technologies in software engineering, applying innovative approaches to drive sustained competitive advantage.
Professional & Technical Skills:
- Must-have: Proficiency in Scala.
- Good-to-have: Experience with Java, Python, or other programming languages.
- Strong understanding of software engineering principles and best practices.
- Experience with Agile development methodologies and tools such as Jira or Confluence.
- Experience with cloud-based technologies such as AWS or Azure.
- Solid grasp of database technologies, both SQL and NoSQL.
Additional Information:
- The candidate should have a minimum of 5 years of experience with Scala.
- The ideal candidate will have a strong educational background in computer science or a related field and a proven track record of delivering impactful software solutions.
- This position is based at our Hyderabad office.
Qualification: NA
Posted 3 months ago
4 - 6 years
6 - 8 Lacs
Hyderabad
Work from Office
Python/Spark/Scala experience required; AWS experience will be an added advantage.
- Professional hands-on experience in Scala/Python.
- Around 4 to 6 years of experience with excellent coding skills in the Java programming language.
- Knowledge of (or hands-on experience with) big data platforms and frameworks is good to have.
- Excellent code-comprehension skills: the candidate should be able to read open-source code (Trino) and build optimizations or improvements on top of it.
- Working experience with Presto/Trino is a great advantage.
- Knowledge of Elasticsearch and Grafana is good to have.
- Experience working under Agile methodology.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Intuitive individual with an ability to manage change and proven time management.
- Proven interpersonal skills; contributes to team effort by accomplishing related results as needed.
- Keeps technical knowledge up to date by attending educational workshops and reviewing publications.
Preferred technical and professional experience:
- Around 4 to 6 years of experience with excellent coding skills in Java.
- Knowledge of (or hands-on experience with) big data platforms and frameworks.
- Ability to read open-source code (Trino) and build optimizations or improvements on top of it.
Posted 3 months ago
3 - 5 years
3 - 8 Lacs
Noida
Work from Office
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. As a Data Engineer, you will collaborate closely with our Data Scientists to develop and deploy machine learning models. Proficiency in the skills listed below will be crucial for building and maintaining pipelines for training and inference datasets.
Responsibilities:
• Work in tandem with Data Scientists to design, develop, and implement machine learning pipelines.
• Use PySpark for data processing, transformation, and preparation for model training.
• Leverage AWS EMR and S3 for scalable and efficient data storage and processing.
• Implement and manage ETL workflows using StreamSets for data ingestion and transformation.
• Design and build pipelines that deliver high-quality training and inference datasets.
• Collaborate with cross-functional teams to ensure smooth deployment and real-time/near-real-time inferencing capabilities.
• Optimize and fine-tune pipelines for performance, scalability, and reliability.
• Ensure IAM policies and permissions are appropriately configured for secure data access and management.
• Implement Spark architecture and optimize Spark jobs for scalable data processing.
Total experience expected: 4-6 years
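To illustrate the kind of dataset-preparation logic this pipeline work involves, here is a minimal sketch of a row-level cleaning step for a training dataset. It is written in plain Python for clarity; in a real job the same logic would run as PySpark DataFrame operations (`filter`/`withColumn`) on EMR, and the field names (`user_id`, `amount`, `ts`) are hypothetical.

```python
from datetime import datetime

def prepare_row(row):
    """Clean one raw event dict into a training-ready record.

    Mirrors what a PySpark filter/withColumn chain would do at scale:
    drop incomplete records, normalize types, standardize the date.
    Field names are illustrative assumptions, not from the posting.
    """
    if row.get("user_id") is None or row.get("amount") is None:
        return None  # drop incomplete records
    return {
        "user_id": str(row["user_id"]).strip(),
        "amount": float(row["amount"]),
        "event_date": datetime.strptime(row["ts"], "%Y-%m-%d").date().isoformat(),
    }

raw = [
    {"user_id": " 42 ", "amount": "19.9", "ts": "2024-05-01"},
    {"user_id": None, "amount": "3.0", "ts": "2024-05-01"},  # dropped
]
clean = [r for r in map(prepare_row, raw) if r is not None]
```

Keeping the transform as a pure function like this makes it easy to unit-test locally before wiring it into a distributed Spark job.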
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Pune
Work from Office
Job Title: Solutions IT Developer - Kafka Specialist
Location: Toronto / Offshore - Pune
About The Role:
We are seeking a seasoned Solutions IT Developer with a strong background in Apache Kafka to join the developer advocacy function of our event streaming team. The ideal candidate will be responsible for Kafka code reviews with clients, troubleshooting client connection issues with Kafka, and supporting client onboarding to Confluent Cloud. This role requires a mix of software development expertise and a deep understanding of Kafka architecture, components, and tuning.
Responsibilities:
1. Support for Line of Business (LOB) users:
- Assist LOB users with onboarding to Apache Kafka (Confluent Cloud/Confluent Platform), ensuring a smooth integration process and understanding of the platform's capabilities.
2. Troubleshooting and technical support:
- Resolve connectivity issues, including client and library problems, to ensure seamless use of our Software Development Kit (SDK), accelerators, and Kafka client libraries.
- Address network connectivity and access issues.
- Provide deep support for the Kafka library, offering advanced troubleshooting and guidance.
- Java 11/17 and Spring Boot (Spring Kafka, Spring Cloud Stream Kafka) experience.
3. Code reviews and standards compliance:
- Perform thorough code reviews to validate client code against our established coding standards and best practices.
- Support the development of async specifications tailored to client use cases, promoting effective and efficient data handling.
4. Developer advocacy:
- Act as a developer advocate for all Kafka development at TD, fostering a supportive community and promoting best practices among developers.
5. Automation and APIs:
- Manage and run automation pipelines for clients using REST APIs as we build out our GitHub Actions flow.
6. Documentation and knowledge sharing:
- Update and maintain documentation standards, including troubleshooting guides, to ensure clear and accessible information is available.
- Create and disseminate knowledge materials, such as how-tos and FAQs, to answer common client questions in general chats related to Kafka development.
Role Requirements:
Qualifications:
- Bachelor's degree in Computer Science.
- Proven work experience as a Solutions Developer or in a similar role with a focus on Kafka design and development.
Skills:
- In-depth knowledge of Java 11/17 and Spring Boot (Spring Kafka, Spring Cloud Stream Kafka).
- Deep knowledge of Apache Kafka, including Kafka Streams and Kafka Connect experience.
- Strong development skills in one or more high-level programming languages (Java, Python).
- Familiarity with Kafka API development and integration.
- Understanding of distributed systems principles and data streaming concepts.
- Experience with source control tools such as Git, and with CI/CD pipelines.
- Excellent problem-solving and critical-thinking skills.
Preferred:
- Kafka certification (e.g., Confluent Certified Developer for Apache Kafka).
- Experience with streaming data platforms and ETL processes.
- Prior work with NoSQL databases and data warehousing solutions.
Experience:
- Minimum of 4 years of hands-on experience with Apache Kafka.
- Experience with large-scale data processing and event-driven system design.
Other Requirements:
- Good communication skills, both written and verbal.
- Ability to work independently as well as collaboratively.
- Strong analytical skills and attention to detail.
- Willingness to keep abreast of industry developments and new technologies.
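A recurring theme in the client-connectivity troubleshooting this role describes is retry behavior: Kafka clients retry failed connections with exponential backoff so they do not hammer an unreachable broker. The sketch below illustrates the general backoff-with-full-jitter pattern in plain Python; the function, its parameter names, and the default values are illustrative assumptions, not Kafka client internals.

```python
import random

def backoff_schedule(base_ms=100, cap_ms=30_000, retries=6, seed=None):
    """Exponential backoff with full jitter for (re)connect attempts.

    Returns a list of wait times in milliseconds: attempt i waits a
    random amount in [0, min(cap_ms, base_ms * 2**i)]. The jitter
    spreads out reconnect storms when many clients fail at once.
    Parameters and defaults are illustrative, not a real client's.
    """
    rng = random.Random(seed)  # seedable for reproducible tests
    return [rng.uniform(0, min(cap_ms, base_ms * 2 ** i))
            for i in range(retries)]

# Deterministic schedule for demonstration (seed fixed).
waits = backoff_schedule(seed=7)
```

The same idea underlies settings like the Kafka client's reconnect backoff configuration, which is typically what gets tuned when onboarding clients with flaky network paths.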
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Noida
Work from Office
About The Role:
Role Purpose: The purpose of this role is to interpret data and turn it into information (reports, dashboards, interactive visualizations, etc.) that can offer ways to improve a business, thus affecting business decisions.
Do:
1. Manage the technical scope of the project in line with the requirements at all stages:
a. Gather information from various sources (data warehouses, databases, data integration and modelling) and interpret patterns and trends.
b. Develop record management processes and policies.
c. Build and maintain relationships at all levels within the client base and understand their requirements.
d. Provide sales data, proposals, data insights, and account reviews to the client base.
e. Identify areas to increase efficiency and automation of processes.
f. Set up and maintain automated data processes.
g. Identify, evaluate, and implement external services and tools to support data validation and cleansing.
h. Produce and track key performance indicators.
2. Analyze data sets and provide adequate information:
a. Liaise with internal and external clients to fully understand the data content.
b. Design and carry out surveys and analyze survey data as per customer requirements.
c. Analyze and interpret complex data sets relating to the customer's business and prepare reports for internal and external audiences using business analytics reporting tools.
d. Create data dashboards, graphs, and visualizations to showcase business performance and provide sector and competitor benchmarking.
e. Mine and analyze large datasets, draw valid inferences, and present them successfully to management using a reporting tool.
f. Develop predictive models and share insights with clients as per their requirements.
Deliver:
No. | Performance Parameter | Measure
1. | Analyzes data sets and provides relevant information to the client | No. of automations done, on-time delivery, CSAT score, zero customer escalations, data accuracy
Posted 3 months ago
5 - 10 years
10 - 20 Lacs
Chennai
Work from Office
Minimum 5-8 years of experience in Hadoop/big data technologies.
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr).
- Hands-on experience with Python/PySpark.
- Design, develop, and optimize ETL pipelines using Python and PySpark to process and transform large-scale datasets, ensuring performance and scalability on big data platforms.
- Implement big data solutions for retail banking use cases such as risk analysis, management reporting (time series, vintage curves, executive summaries), and regulatory reporting, while maintaining data accuracy and compliance standards.
- Collaborate with cross-functional teams to integrate data from various sources, troubleshoot production issues, and ensure efficient, reliable data processing operations.
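The vintage curves mentioned above track, for each origination cohort of loans, the cumulative default rate as the cohort ages. A minimal sketch of that aggregation is shown below in plain Python; a production PySpark job would express the same thing as a groupBy/pivot over a much larger dataset, and the field names and three-month horizon here are illustrative assumptions.

```python
from collections import defaultdict

def vintage_curve(loans, horizon=3):
    """Cumulative default rate per origination cohort by months on book.

    `loans` is a list of dicts with `cohort` (e.g. "2023-01") and
    `default_month` (months on book at default, or None if the loan
    has not defaulted). Returns {cohort: [rate at month 1, 2, ...]}.
    Field names and the horizon are illustrative, not from the posting.
    """
    by_cohort = defaultdict(list)
    for loan in loans:
        by_cohort[loan["cohort"]].append(loan)
    curves = {}
    for cohort, cohort_loans in by_cohort.items():
        n = len(cohort_loans)
        curves[cohort] = [
            sum(1 for l in cohort_loans
                if l["default_month"] is not None and l["default_month"] <= m) / n
            for m in range(1, horizon + 1)
        ]
    return curves

# One cohort of four loans, one default in month 2.
loans = [
    {"cohort": "2023-01", "default_month": 2},
    {"cohort": "2023-01", "default_month": None},
    {"cohort": "2023-01", "default_month": None},
    {"cohort": "2023-01", "default_month": None},
]
curves = vintage_curve(loans)
```

Plotting each cohort's list against months on book gives the familiar family of vintage curves used in risk and management reporting.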
Posted 3 months ago