7 - 8 years
15 - 25 Lacs
Chennai
Work from Office
Assistant Manager - Data Engineering

Job Summary: We are seeking a Lead GCP Data Engineer with experience in data modeling and building data pipelines. The ideal candidate should have hands-on experience with GCP services such as Composer, GCS, GBQ, Dataflow, Dataproc, and Pub/Sub, along with a proven track record in designing data solutions, covering everything from data integration to end-to-end storage in BigQuery.

Responsibilities:
- Collaborate with the Client's Data Architect: Work closely with client data architects and technical teams to design and develop customized data solutions that meet business requirements.
- Design Data Flows: Architect and implement data flows that ensure seamless data movement from source systems to target systems, facilitating real-time or batch data ingestion, processing, and transformation.
- Data Integration & ETL Processes: Design and manage ETL processes, ensuring the efficient integration of diverse data sources and high-quality data pipelines.
- Build Data Products in GBQ: Build data products using Google BigQuery (GBQ), designing data models and ensuring data is structured and optimized for analysis.
- Stakeholder Interaction: Regularly engage with business stakeholders to gather data requirements and translate them into technical specifications, building solutions that align with business needs.
- Ensure Data Quality & Security: Implement best practices in data governance, security, and compliance for both storage and processing of sensitive data.
- Continuous Improvement: Evaluate and recommend new technologies and tools to improve data architecture, performance, and scalability.

Skills:
- 6+ years of development experience
- 4+ years of experience with SQL and Python
- 2+ years of experience with GCP BigQuery, Dataflow, GCS, and Postgres
- 3+ years of experience building data pipelines from scratch in a highly distributed and fault-tolerant manner
- Experience with Cloud SQL, Cloud Functions, Pub/Sub, Cloud Composer, etc.
- Familiarity with big data and machine learning tools and platforms
- Comfortable with open-source technologies including Apache Spark, Hadoop, and Kafka
- Comfortable with a broad array of relational and non-relational databases
- Proven track record of building applications in a data-focused role (cloud and traditional data warehouse)
- Current or previous experience leading a team

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
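As a rough sketch of how the Composer-to-BigQuery flow described above might be orchestrated, here is a minimal, hypothetical Airflow DAG; all bucket, dataset, and table names are placeholders rather than details from the posting:

```python
# Minimal sketch of a Cloud Composer (Airflow) DAG loading GCS data into BigQuery.
# All resource names (bucket, dataset, table) are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="gcs_to_bq_daily",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",        # `schedule=` on newer Airflow releases
    catchup=False,
) as dag:
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders",
        bucket="example-landing-bucket",                          # placeholder bucket
        source_objects=["orders/*.csv"],                          # placeholder path
        destination_project_dataset_table="example_ds.orders",    # placeholder table
        source_format="CSV",
        skip_leading_rows=1,
        autodetect=True,                # let BigQuery infer the schema
        write_disposition="WRITE_TRUNCATE",
    )
```

In a real Composer environment this file would simply be dropped into the DAGs bucket; the scheduler picks it up and runs the load once a day.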
Posted 2 months ago
5 - 7 years
15 - 20 Lacs
Chennai
Hybrid
Usxi is looking for Big Data Developers who will work on collecting, storing, processing, and analyzing huge sets of data. The Data Developers must have exceptional analytical skills, showing fluency in tools such as MySQL and strong Python, Shell, Java, PHP, and T-SQL programming skills. The candidate must also be technologically adept, demonstrating strong computer skills, and capable of developing databases using SSIS packages and T-SQL, MSSQL, and MySQL scripts. The candidate will also have the ability to design, build, and maintain the business's ETL pipeline and data warehouse, and will demonstrate expertise in data modeling and query performance tuning on SQL Server, MySQL, Redshift, Postgres, or similar platforms.

Key responsibilities will include:
- Develop and maintain data pipelines
- Design and implement ETL processes
- Hands-on data modeling: design conceptual, logical, and physical data models with Type 1 and Type 2 dimensions (see the sketch below)
- Knowledge to move the ETL code base from on-premises to cloud architecture
- Understanding data lineage and governance for different data sources
- Maintaining clean and consistent access to all our data sources
- Hands-on experience deploying code using CI/CD pipelines
- Assemble large and complex data sets strategically to meet business requirements
- Enable business users to bring data-driven insights into their business decisions through reports and dashboards

Required Qualifications:
- Hands-on experience in big data technologies including Scala or Spark (Azure Databricks preferred), Hadoop, Hive, HDFS
- Python, Java & SQL
- Knowledge of Microsoft's Azure Cloud
- Experience with and commitment to development and testing best practices
- DevOps experience with continuous integration/delivery best practices, technologies, and tools
- Experience deploying Azure SQL Database and Azure Data Factory, and well-acquainted with other Azure services including Azure Data Lake and Azure ML
- Experience implementing REST API calls and authentication
- Experience working with agile project management methodologies
- Computer Science degree/diploma
- Microsoft Certified: Azure Data Engineer Associate (DP-203)
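The Type 2 dimension handling mentioned in the responsibilities is easiest to picture in code. Below is a hedged PySpark outline of one common approach: close out current rows whose tracked attribute changed, then append the new versions. All table and column names are invented for illustration:

```python
# Hypothetical PySpark sketch of a Type 2 slowly changing dimension update.
# Table and column names are illustrative, not taken from the posting.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("scd2_sketch")
         .enableHiveSupport()
         .getOrCreate())

# Assumed schemas (illustrative only):
#   warehouse.dim_customer: customer_id, city, valid_from, valid_to, is_current
#   staging.customer:       customer_id, city, load_date (latest source snapshot)
dim = spark.table("warehouse.dim_customer")
stg = spark.table("staging.customer")

cur = dim.filter(F.col("is_current"))
joined = cur.alias("d").join(stg.alias("s"), "customer_id")

# Current rows whose tracked attribute changed since the last load.
changed = joined.filter(F.col("d.city") != F.col("s.city"))
changed_ids = changed.select("customer_id")

# 1) Close out the superseded current versions.
closed = changed.select(
    F.col("customer_id"),
    F.col("d.city").alias("city"),
    F.col("d.valid_from").alias("valid_from"),
    F.col("s.load_date").alias("valid_to"),
    F.lit(False).alias("is_current"),
)

# 2) Open the new versions carried by the staging snapshot.
opened = changed.select(
    F.col("customer_id"),
    F.col("s.city").alias("city"),
    F.col("s.load_date").alias("valid_from"),
    F.lit(None).cast("date").alias("valid_to"),
    F.lit(True).alias("is_current"),
)

# 3) Keep every existing row except the current rows being replaced
#    (historical versions of changed customers are preserved).
keep = (dim.join(changed_ids.withColumn("changed", F.lit(True)), "customer_id", "left")
        .filter(~(F.col("is_current") & F.col("changed").isNotNull()))
        .drop("changed"))

result = keep.unionByName(closed).unionByName(opened)

# Write to a staged table and swap, rather than overwriting the source in place.
result.write.mode("overwrite").saveAsTable("warehouse.dim_customer_scd2_staged")
```

A Type 1 dimension, by contrast, would simply overwrite the changed attributes in place with no history rows.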
Posted 2 months ago
3 - 8 years
8 - 18 Lacs
Navi Mumbai, Mumbai, Delhi
Work from Office
Hadoop Administrator

JD: Minimum 2+ years of experience in the Hadoop ecosystem.
1. Good knowledge of Big Data / CDP architecture
2. Knowledge of Kafka replication and troubleshooting
3. Must have knowledge of setting up HDFS, Hive, and Kafka, and troubleshooting them
4. Must have knowledge of setting up Kerberos, Ranger, and HBase
5. Must have good knowledge of performance tuning of the Hadoop ecosystem
6. Must have good knowledge of Hadoop ecosystem upgrades
7. Installation and configuration of MySQL on Linux and Windows
8. Understanding of Hadoop HA and recovery
9. Ability to multi-task and context-switch effectively between different activities and teams
10. Provide 24x7 support for critical production systems
11. Excellent written and verbal communication
12. Ability to organize and plan work independently
13. Ability to automate day-to-day tasks with Unix shell scripting (a minimal automation sketch follows)
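The automation requirement in item 13 often starts with simple cluster health checks. The following speculative Python sketch wraps the standard `hdfs dfsadmin -report` command and flags dead DataNodes; the report-parsing regex assumes the usual Hadoop output format and may need adjusting per distribution:

```python
# Speculative automation sketch: parse `hdfs dfsadmin -report` output and
# flag dead DataNodes. Assumes the common report format "Dead datanodes (N):".
import re
import subprocess
import sys

def dead_datanode_count() -> int:
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Dead datanodes \((\d+)\)", report)
    return int(match.group(1)) if match else 0

if __name__ == "__main__":
    dead = dead_datanode_count()
    if dead > 0:
        print(f"ALERT: {dead} dead DataNode(s) reported", file=sys.stderr)
        sys.exit(1)
    print("HDFS report: all DataNodes live")
```

A cron entry or a small wrapper that pages on a non-zero exit code turns this into a basic 24x7 check.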
Posted 2 months ago
5 - 10 years
7 - 17 Lacs
Hyderabad
Work from Office
About this role: Wells Fargo is seeking a Lead Software Engineer.

In this role, you will:
- Lead complex technology initiatives, including those that are companywide with broad impact
- Act as a key participant in developing standards and companywide best practices for engineering complex and large-scale technology solutions across technology engineering disciplines
- Design, code, test, debug, and document for projects and programs
- Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives, the enterprise technological environment, and technical challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented technical factors
- Make decisions in developing standard and companywide best practices for engineering and technology solutions, requiring understanding of industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives
- Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals
- Lead projects and teams, or serve as a peer mentor

Required Qualifications:
- 5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- 4+ years of Spark development experience
- 4+ years of Scala/Java development for Spark, focusing on the functional programming paradigm
- Spark SQL, Streaming, and DataFrame/Dataset API experience
- Spark query tuning and performance optimization
- SQL & NoSQL database integration with Spark (MS SQL Server and MongoDB)
- Deep understanding of distributed systems (CAP theorem, partitioning and bucketing, replication, memory layouts, consistency)
- Deep understanding of Hadoop cloud platforms, HDFS, ETL/ELT processes, and Unix shell scripting

Job Expectations:
- Experience working in an Agile development methodology with Git and JIRA
- Experience/working knowledge of technologies like Kafka, Cassandra, Oracle RDBMS, and JSON structures
- Python development with/without Spark
- Experience in the banking/financial domain
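As a rough illustration of the Spark Structured Streaming and Kafka combination in the desired qualifications, here is a hypothetical PySpark job that consumes a topic and aggregates events per minute; broker and topic names are placeholders, and the spark-sql-kafka package is assumed to be on the classpath:

```python
# Hypothetical PySpark Structured Streaming sketch: consume a Kafka topic,
# count events per one-minute window, and print to the console.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream_sketch").getOrCreate()

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "transactions")                # placeholder topic
       .option("startingOffsets", "latest")
       .load())

# The Kafka source exposes key/value/topic/partition/offset/timestamp columns.
counts = (raw.select("timestamp")
          .withWatermark("timestamp", "2 minutes")         # bound late data
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .option("truncate", "false")
         .start())
query.awaitTermination()
```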
Posted 2 months ago
5 - 9 years
7 - 11 Lacs
Pune, Hinjewadi
Work from Office
Software Requirements:
- Proficient in Java (version 1.8 or higher), with a solid understanding of object-oriented programming and design patterns
- Experience with Big Data technologies including Hadoop, Spark, Hive, HBase, and Kafka
- Strong knowledge of SQL and NoSQL databases, with Oracle experience preferred
- Familiarity with data processing frameworks and formats such as JSON, Avro, and Parquet (a conversion sketch follows below)
- Proficiency in Linux shell scripting and basic Unix OS knowledge
- Experience with code versioning tools such as Git, and project management tools like JIRA
- Familiarity with CI/CD tools such as Jenkins or TeamCity, and build tools like Maven

Overall Responsibilities:
- Translate application storyboards and use cases into functional applications while ensuring high performance and responsiveness
- Design, build, and maintain efficient, reusable, and reliable Java code
- Develop high-performance, low-latency components that run on Spark clusters and support Big Data platforms
- Identify and resolve bottlenecks and bugs, proposing best practices and standards
- Collaborate with global teams to ensure project alignment and successful execution
- Perform testing of software prototypes and facilitate their transfer to operational teams
- Conduct analysis of large data sets to derive actionable insights and contribute to advanced analytical model building
- Mentor junior developers and assist in design solution strategies

Category-wise Technical Skills:
- Core Development Skills: Strong Core Java and multithreading experience; knowledge of concurrency patterns and scalable application design principles
- Big Data Technologies: Proficiency in Hadoop ecosystem components (HDFS, Hive, HBase, Apache Spark); experience in building self-service, platform-agnostic data access APIs
- Analytical Skills: Demonstrated ability to analyze large data sets and derive insights; strong systems analysis, design, and architecture fundamentals
- Testing and CI/CD: Experience in unit testing and SDLC activities; familiarity with Agile/Scrum methodologies for project management

Experience:
- 5 to 9 years of experience in software development, with a strong emphasis on Java and Big Data technologies
- Proven experience in performance tuning and troubleshooting applications in a Big Data environment
- Experience working in a collaborative global team setting

Day-to-Day Activity:
- Collaborate with cross-functional teams to understand and translate functional requirements into technical designs
- Write and maintain high-quality, performant code while enforcing coding standards
- Conduct regular code reviews and provide constructive feedback to team members
- Monitor application performance and address issues proactively
- Engage in daily stand-ups and sprint planning sessions to ensure alignment with team goals

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant certifications in Big Data technologies or Java development are a plus

Soft Skills:
- Strong analytical and problem-solving skills
- Excellent communication and collaboration skills, with the ability to work effectively in a team
- Ability to mentor others and share knowledge within the team
- Strong organizational skills and attention to detail, with the ability to manage multiple priorities
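The JSON/Avro/Parquet familiarity above usually shows up as a conversion step: raw JSON in, columnar Parquet out. Here is a small, hypothetical PySpark sketch of that step; paths and column names are placeholders:

```python
# Hypothetical PySpark sketch: convert raw JSON events into date-partitioned Parquet.
# Input/output paths and the event_ts column are placeholders, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("json_to_parquet_sketch").getOrCreate()

events = spark.read.json("hdfs:///landing/events/*.json")   # placeholder input path

(events
 .withColumn("event_date", F.to_date("event_ts"))           # assumes an event_ts column
 .repartition("event_date")                                 # one task per partition value
 .write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("hdfs:///warehouse/events_parquet"))               # placeholder output path
```

Partitioning by date keeps downstream Hive/Spark queries that filter on the date column reading only the files they need.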
Posted 2 months ago
8 - 13 years
25 - 40 Lacs
Bengaluru
Hybrid
Job Title / Primary Skill: Big Data Developer (Lead/Associate Manager)
Management Level: G150
Years of Experience: 8 to 13 years
Job Location: Bangalore (Hybrid)
Must Have Skills: Big Data, Spark, Scala, SQL, Hadoop ecosystem
Educational Qualification: BE/BTech/MTech/MCA, or a Bachelor's or Master's degree in Computer Science

Job Overview: Overall experience of 8+ years in IT, software engineering, or a relevant discipline. Designs, develops, implements, and updates software systems in accordance with the needs of the organization. Evaluates, schedules, and resources development projects; investigates user needs; and documents, tests, and maintains computer programs.

Job Description: We are looking for developers with strong Scala programming skills and a good knowledge of SQL.

Technical Skills:
- Scala, Python -> Scala is often used for Hadoop-based projects, while both Python and Scala are common choices for Apache Spark-based projects.
- SQL -> Knowledge of SQL (Structured Query Language) is important for querying and manipulating data.
- Shell Script -> Shell scripts are used for batch processing of data; they can also be used for scheduling jobs and are often used for deploying applications.
- Spark Scala -> Spark Scala allows you to write Spark applications using the Spark API in Scala.
- Spark SQL -> Allows you to work with structured data using SQL-like queries and DataFrame APIs. You can execute SQL queries against DataFrames, enabling easy data exploration, transformation, and analysis (see the example below).

The typical tasks and responsibilities of a Big Data Developer include:
1. Data Ingestion: Collecting and importing data from various sources, such as databases, logs, and APIs, into the Big Data infrastructure.
2. Data Processing: Designing data pipelines to clean, transform, and prepare raw data for analysis. This often involves technologies like Apache Hadoop and Apache Spark.
3. Data Storage: Selecting appropriate data storage technologies like Hadoop Distributed File System (HDFS), Hive, Impala, or cloud-based storage solutions (Snowflake, Databricks).
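To make the Spark SQL point above concrete, here is a tiny example (sample data invented) that registers a DataFrame as a temporary view and queries it with SQL:

```python
# Small illustrative example of Spark SQL over a DataFrame; the sample data is invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark_sql_demo").getOrCreate()

orders = spark.createDataFrame(
    [("o1", "IN", 120.0), ("o2", "US", 80.0), ("o3", "IN", 45.5)],
    ["order_id", "country", "amount"],
)
orders.createOrReplaceTempView("orders")

# The same structured data is now queryable with SQL-like syntax.
spark.sql("""
    SELECT country, SUM(amount) AS total_amount
    FROM orders
    GROUP BY country
    ORDER BY total_amount DESC
""").show()
```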
Posted 2 months ago
4 - 7 years
9 - 13 Lacs
Chennai
Work from Office
Skills: Big Data, PySpark, Python, Hadoop/HDFS, Spark; Good to have: GCP

Roles/Responsibilities:
- Develops and maintains scalable data pipelines to support continuing increases in data volume and complexity
- Collaborates with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization
- Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it
- Writes unit/integration tests, contributes to the engineering wiki, and documents work
- Performs data analysis required to troubleshoot data-related issues and assists in their resolution
- Works closely with a team of frontend and backend engineers, product managers, and analysts
- Defines company data assets (data models) and Spark, Spark SQL, and Hive SQL jobs to populate data models
- Designs data integrations and a data quality framework

Basic Qualifications:
- BS or MS degree in Computer Science or a related technical field
- 4+ years of SQL experience (NoSQL experience is a plus)
- 4+ years of experience with schema design and dimensional data modelling
- 4+ years of experience with Big Data technologies like Spark and Hive
- 2+ years of data engineering experience on Google Cloud Platform services like BigQuery (see the example below)
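Since the role pairs Spark pipelines with BigQuery, a short, hypothetical example of querying BigQuery from Python follows; it assumes the google-cloud-bigquery client library, application-default credentials, and placeholder project/dataset/table names:

```python
# Hypothetical sketch: run a BigQuery query from Python with google-cloud-bigquery.
# Assumes application-default credentials; all object names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # picks up the default project and credentials

query = """
    SELECT country, COUNT(*) AS orders
    FROM `example_project.sales.orders`   -- placeholder table
    GROUP BY country
    ORDER BY orders DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row["country"], row["orders"])
```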
Posted 2 months ago
7 - 10 years
9 - 12 Lacs
Mumbai
Work from Office
Position Overview: Synechron is seeking a skilled and experienced ETL Developer to join our team. The ideal candidate will have strong proficiency in ETL tools, a deep understanding of big data technologies, and expertise in cloud data warehousing solutions. You will play a critical role in designing, developing, and maintaining ETL processes to ensure data integration and transformation for our high-profile clients.

Software Requirements:
- Proficiency in ETL tools such as Informatica
- Strong experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig)
- Expertise in cloud data warehousing solutions, specifically Snowflake
- Knowledge of SQL and data modeling
- Familiarity with data integration and transformation techniques
- Understanding of data governance and data quality principles

Overall Responsibilities:
- Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into Snowflake and other data warehouses (see the sketch below)
- Collaborate with data architects and analysts to understand data requirements and ensure data is delivered accurately and on time
- Optimize ETL processes for performance and scalability
- Implement data quality checks and data governance policies
- Troubleshoot and resolve data issues and discrepancies
- Document ETL processes and maintain technical documentation
- Stay updated with industry trends and best practices in data engineering

Technical Skills:
- ETL Tools: Informatica PowerCenter, Informatica Cloud
- Big Data Technologies: Hadoop (HDFS, MapReduce, Hive, Pig)
- Cloud Platforms: Snowflake
- Database Management: SQL, NoSQL databases
- Scripting Languages: Python, shell scripting
- Data Modeling: star schema, snowflake schema

Nice-to-Have:
- Experience with data visualization tools (e.g., Tableau, Power BI)
- Familiarity with Apache Spark
- Knowledge of data lakes and data mesh architecture

Experience:
- Total Experience: 7-10 years
- Relevant Experience: Minimum of 5-9 years in ETL development or similar roles
- Proven experience with Hadoop and data warehousing solutions, particularly Snowflake
- Experience working with large datasets and performance tuning of ETL processes

Day-to-Day Activities:
- Design and implement ETL workflows based on business requirements
- Monitor ETL jobs and troubleshoot any issues that arise
- Collaborate with data analysts to provide data insights
- Conduct code reviews and ensure best practices are followed
- Participate in agile ceremonies (sprint planning, stand-ups, retrospectives)
- Provide support during data migration and integration projects

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Relevant certifications in ETL tools or data engineering (e.g., Informatica, Snowflake certifications) are a plus

Soft Skills:
- Strong analytical and problem-solving skills
- Effective communication and collaboration skills
- Attention to detail and commitment to delivering high-quality results
- Ability to work independently and in a team-oriented environment
- Flexibility to adapt to changing priorities and technologies

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all.
We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
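As a hedged sketch of the loading step into Snowflake described in this posting, the example below uses the Snowflake Python connector; the account, credentials, stage, and table names are all placeholders:

```python
# Hypothetical sketch using the snowflake-connector-python library.
# Account, credentials, stage, and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",      # placeholder account locator
    user="ETL_USER",                # placeholder user
    password="***",                 # in practice, pull from a secrets manager
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Stage-to-table load, the typical final step of an ELT flow.
    cur.execute(
        "COPY INTO orders FROM @landing_stage/orders/ "
        "FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)"
    )
    cur.execute("SELECT COUNT(*) FROM orders")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```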
Posted 2 months ago
14 - 22 years
45 - 75 Lacs
Bengaluru
Remote
Architecture design and total solution design, from requirements analysis through design and engineering for data ingestion, pipelines, data preparation and orchestration, applying the right ML algorithms on the data stream, and predictions.

Responsibilities:
- Define, design, and deliver ML architecture patterns operable in native and hybrid cloud architectures.
- Research, analyze, recommend, and select technical approaches to address challenging development and data integration problems related to ML model training and deployment in enterprise applications.
- Perform research activities to identify emerging technologies and trends that may affect Data Science/ML lifecycle management in the enterprise application portfolio.
- Implement the solution using AI orchestration.

Requirements:
- Hands-on programming and architecture capabilities in Python and Java
- Minimum 6+ years of experience in enterprise application development (Java, .NET)
- Experience in implementing and deploying Machine Learning solutions (using various models, such as Linear/Logistic Regression, Support Vector Machines, (Deep) Neural Networks, Hidden Markov Models, Conditional Random Fields, Topic Modeling, Game Theory, Mechanism Design, etc.)
- Experience in building data pipelines, data cleaning, feature engineering, and feature stores
- Experience in data platforms like Databricks and Snowflake, and AWS/Azure/GCP cloud and data services
- Strong hands-on experience with statistical packages and ML libraries (e.g., R, Python scikit-learn, Spark MLlib)
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik)
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Hands-on experience in RDBMS, NoSQL, and big data stores such as Elastic, Cassandra, HBase, Hive, and HDFS
- Work experience in Solution Architect/Software Architect/Technical Lead roles
- Experience with open-source software
- Excellent problem-solving skills and the ability to break down complexity
- Ability to see multiple solutions to problems and choose the right one for the situation
- Excellent written and oral communication skills
- Demonstrated technical expertise in architecting solutions around AI, ML, deep learning, and related technologies
- Experience developing AI/ML models in real-world environments and integrating AI/ML into large-scale enterprise applications using cloud-native or hybrid technologies
- In-depth experience in AI/ML and data analytics services offered on Amazon Web Services and/or Microsoft Azure cloud solutions and their interdependencies
- Specialization in at least one part of the AI/ML stack (frameworks and tools like MXNet and TensorFlow; ML platforms such as Amazon SageMaker for data scientists; API-driven AI services like Amazon Lex, Amazon Polly, Amazon Transcribe, Amazon Comprehend, and Amazon Rekognition to quickly add intelligence to applications with a simple API call)
- Demonstrated experience developing best practices and recommendations around tools/technologies for ML lifecycle capabilities such as data collection, data preparation, feature engineering, model management, MLOps, model deployment approaches, and model monitoring and tuning
Back end: LLM APIs and hosting, both proprietary and open-source solutions, cloud providers, ML infrastructure
Orchestration: Workflow management such as LangChain, LlamaIndex, Hugging Face, Ollama
Data Management: LLM cache
Monitoring: LLM Ops tools
Tools & Techniques: prompt engineering, embedding models, vector DBs, validation frameworks, annotation tools, transfer learning, and others (an embedding-plus-vector-search sketch follows below)
Pipelines: Gen AI pipelines and implementation on cloud platforms (preference: Azure Databricks, Docker containers, Nginx, Jenkins)
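As a hedged illustration of the embedding-model-plus-vector-DB pairing listed under Tools & Techniques, the sketch below embeds a few documents with sentence-transformers and searches them with a FAISS index; the model choice and sample texts are arbitrary examples, not part of the posting:

```python
# Illustrative sketch: embed documents and run nearest-neighbour search with FAISS.
# Model name and sample texts are arbitrary; assumes sentence-transformers and faiss-cpu.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Reset a forgotten password from the login page.",
    "Invoices are emailed on the first business day of each month.",
    "Contact support via the in-app chat widget.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])   # inner product == cosine on unit vectors
index.add(doc_vecs)

query_vec = model.encode(["How do I change my password?"], normalize_embeddings=True)
scores, ids = index.search(query_vec, 2)       # top-2 neighbours
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[doc_id]}")
```

In a Gen AI pipeline this retrieval step typically sits in front of the LLM call, supplying the matched documents as grounding context.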
Posted 2 months ago
1 - 3 years
3 - 5 Lacs
Pune
Work from Office
What you'll do:
- As part of our full-stack product engineering team, you will build multi-tenant, cloud-based software products/platforms and internal assets that leverage cutting-edge technologies on the Amazon AWS cloud platform (see the boto3 sketch below)
- Pair program, write unit tests, lead code reviews, and collaborate with QA analysts to ensure you develop the highest-quality multi-tenant software that can be productized
- Work with junior developers to implement large features that are on the cutting edge of Big Data
- Be a technical leader for your team, and help them improve their technical skills
- Stand up for engineering practices that ensure quality products: automated testing, unit testing, agile development, continuous integration, code reviews, and technical design
- Work with product managers and architects to design product architecture and work on POCs
- Take immediate responsibility for project deliverables
- Understand client business issues and design features that meet client needs
- Undergo on-the-job and formal training and certifications, constantly advancing your knowledge and problem-solving skills

What you'll bring:
- 1-3 years of experience in developing software, ideally building SaaS products and services
- Bachelor's degree in CS, IT, or a related discipline
- Strong analytic, problem-solving, and programming ability
- Strong hands-on experience with AWS services (EC2, EMR, S3, serverless stack, RDS, SageMaker, IAM, EKS, etc.)
- Experience coding in an object-oriented language such as Python, Java, or C#
- Hands-on experience with Apache Spark, EMR, Hadoop, HDFS, or other big data technologies
- Experience with development on the AWS (Amazon Web Services) platform is preferable
- Experience in Linux shell or PowerShell scripting is preferable
- Experience in HTML5, JavaScript, and JavaScript libraries is preferable
- Pharma domain understanding is good to have
- Initiative and drive to contribute
- Excellent organizational and task management skills
- Strong communication skills
- Ability to work in global cross-office teams
- ZS is a global firm; fluency in English is required
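For the AWS services listed above, a tiny, hypothetical boto3 example follows: uploading a file to S3 and listing a prefix. Bucket and key names are placeholders, and configured AWS credentials are assumed:

```python
# Hypothetical boto3 sketch: upload a file to S3 and list objects under a prefix.
# Bucket/key names are placeholders; assumes AWS credentials are configured.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="daily_report.csv",          # local file (placeholder)
    Bucket="example-analytics-bucket",    # placeholder bucket
    Key="reports/daily_report.csv",
)

response = s3.list_objects_v2(Bucket="example-analytics-bucket", Prefix="reports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```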
Posted 2 months ago
4 - 9 years
7 - 12 Lacs
Bengaluru
Work from Office
Practice Overview:
Skill/Operating Group: Technology Consulting
Level: Consultant
Location: Gurgaon/Mumbai/Bangalore
Travel Percentage: Expected travel could be anywhere between 0-100%

Why Technology Consulting: The Technology Consulting business within Capability Network invents the future for clients by providing them with the right guidance, design thinking, and innovative solutions for technological transformation. As technology rapidly evolves, it's more important than ever to have an innovation advisor who can create a new vision, or put one into place, to solve the client's toughest business problems. Specialize in management or technology consulting to transform the world's leading organizations by building innovative business solutions as their trusted advisor by:
- Helping Clients: Rethinking IT and digital ecosystems and innovating to help clients keep pace with fast-changing, customer-driven environments.
- Enhancing your Skillset: Building expertise and innovating with leading-edge technologies such as Blockchain, Artificial Intelligence, and Cloud.
- Transforming Businesses: Developing customized, next-generation products and services that help clients shift to new business models designed for today's connected landscape of disruptive technologies.

Principal Duties and Responsibilities: Working closely with our clients, Consulting professionals design, build, and implement strategies and POCs that can help enhance business performance. They develop specialized expertise (strategic, industry, functional, technical) in a diverse project environment that offers multiple opportunities for career growth. The opportunities to make a difference within exciting client initiatives are limitless in this ever-changing business landscape. Here are just a few of your day-to-day responsibilities:
- Identify, assess, and solve complex business problems for your area of responsibility, where analysis of situations or data requires an in-depth evaluation of variable factors.
- Interact with client stakeholders to understand their AI problems and priority use-cases, define a problem statement, understand the scope of the engagement, and drive projects to deliver value to the client.
- Understand the client's business and IT goals and vision, and identify opportunities for reinvestment.
- Through your expertise and experience, guide your teammates to suggest the right solutions to meet the needs of clients, and help draw up practical implementation road maps that position them for long-term success.
- Benchmark against global research benchmarks and leading industry peers to understand the current state and recommend AI solutions.
- Conduct discovery workshops and design sessions to elicit AI opportunities and client pain areas.
- Design and develop enterprise-wide AI architecture and strategy.
- Contribute towards practice development initiatives like new offering development, people management, etc.

Qualifications:
- MBA degree from a Tier-1 college (preferable)
- Bachelor's degree
- AI/ML/Cloud AI certifications preferred
- Minimum of 4-9 years of large-scale consulting experience and managing teams in a consulting environment or at high-tech companies

Experience: We are looking for Advanced Analytics and AI Consultants to join our growing team, with experience in machine learning and NLP applications, operating across all stages of the innovation spectrum with a view to build the future. Experience in designing and building end-to-end AI strategy and solutions using cloud and non-cloud AI, ML, and NLP algorithms.
Assessment: Works as part of multi-disciplinary teams and is responsible for independently driving specific AI workstreams through collaboration with designers, other AI team members, platform and data engineers, business subject matter experts, and technology delivery teams to assess AI potential and develop use cases. Liaises effectively with the global community and senior leadership to identify and drive value opportunities around AI, supporting investments in POCs, pilots, and chargeable projects.

Design: The Consultant's role on these projects centers around the application of analytics, data science, and advanced cognitive methods to derive insights from data and develop POCs and appropriate high-level solution designs. Strong expertise in designing AutoML and MLOps AI solutions using any of the cloud services (AWS, GCP, Azure).

Architecture: The nature of these projects changes as ideas and concepts mature, ranging from research, proofs-of-concept, and the art-of-the-possible to the delivery of real-world applications for our clients. The focus in this area is on the impact to the client's technology landscape/architecture and ensuring the formulation of relevant guiding principles and platform components. Expertise in designing enterprise AI strategy and Responsible AI frameworks is preferred. Strong experience handling multiple end-to-end AI lifecycle projects, from data preparation and modeling through build, train, and deploy.

Product/Framework/Tools Evaluation: Collaborate with business experts for business understanding, with other consultants and platform engineers for solutions, and with technology teams for prototyping and client implementations. Evaluate existing products and frameworks and develop options for proposed solutions. Have an understanding of the frameworks/tools needed to engage the client in meaningful discussions and steer towards the right recommendation.

Modeling: Strong SME knowledge in data preparation, feature engineering, feature selection, training datasets, algorithm selection, optimization, and production deployment. Work as a technical SME advising teams and the client on the right AI solutions. The Consultant should have practical industry expertise. The areas of Financial Services, Retail, Telecommunications, Life Sciences, and Resources are of interest, but experience in equivalent domains is also welcomed. Consultants should understand the key technology trends in their domain and the related business implications.

Key Competencies and Skills:
- Strong desire to work in technology-driven business transformation
- Strong knowledge of technology trends across IT and digital, and how they can be applied to companies to address real-world problems and opportunities
- Exceptional interpersonal and presentation skills; ability to convey technology and business value propositions to senior stakeholders
- Team-oriented and collaborative working style, both with clients and within the organization
- Capacity to develop high-impact thought leadership that articulates a forward-thinking view of the market
- Ability to develop and maintain strong internal and client relationships
- Proven track record of working creatively and analytically in a problem-solving environment
- Proven success in contributing to a team-oriented environment with effective consulting skills
- Proven track record of quickly understanding the key value drivers of a business and how they impact the scope and approach of the engagement
- Flexibility to accommodate client travel requirements

Technical Skills:
- Good exposure to AI/NLP/ML algorithms and building models/chatbots/solutions
- Expert in developing AI solutions using cloud AI services (AWS, GCP, Azure)
- Strong understanding of the entire ML project lifecycle, from business issue identification and data audit to model maintenance in production (a small training-pipeline sketch follows below)
- Experience in conducting client workshops and AI use-case development and prioritization
- Strong knowledge of AI frameworks and algorithms
- Strong and relevant experience in the end-to-end AI lifecycle, from data capture, data preparation, model planning, model selection, and model build, train, and test through model deployment
- Expert in designing AI pipelines, feature engineering, feature selection, labeling, training, and optimizing models
- Good understanding of scaling/industrializing AI, Enterprise AI, and Lean AI concepts, frameworks, and tools
- Familiarity with distributed storage frameworks and architectures
- Machine learning languages: Python/Scala/R/SAS
- Virtual agents: NLU, NLG, text-to-speech
- ML libraries: scikit-learn, Stanford NLP, NLTK, NumPy, PyTorch, etc.
- Familiarity with cloud AI and data offerings
- Databases: HDFS, NoSQL, in-memory (Spark), Neo4j
- Experience with any of the platforms like IBM Watson, Microsoft ML, Google Cloud AI, or AWS ML
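To ground the ML-lifecycle skills above, here is a deliberately small scikit-learn sketch: a text-classification pipeline trained and evaluated on invented sample data; the labels and texts are toy examples:

```python
# Small scikit-learn sketch: a text-classification pipeline on invented sample data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = [
    "card payment failed", "unable to pay invoice", "refund not received",
    "great support, thanks", "issue resolved quickly", "love the new feature",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = complaint, 0 = praise (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=42, stratify=labels
)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),   # feature engineering step
    ("model", LogisticRegression(max_iter=1000)),     # model build/train step
])
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The same pipeline object can later be serialized and served, which is what makes the pipeline abstraction a convenient unit for the deployment and MLOps stages mentioned above.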
Posted 3 months ago
5 - 9 years
19 - 25 Lacs
Pune, Bengaluru, Hyderabad
Work from Office
We are looking for "Data Engineer (Hadoop Cloudera + PySpark + Python)" with Minimum 5 years experience Contact- Yashra (95001 81847) Required Candidate profile Experience on Ingestion framework Hadoop Cloudera , HDFS, HIVE Language – Python, Pyspark
Posted 3 months ago
4 - 8 years
9 - 13 Lacs
Hyderabad
Work from Office
About Persistent: We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com

About The Position: We are looking for a Big Data Developer to carry out coding or programming of Hadoop applications and to develop software using Hadoop technologies like Spark, Scala, Python, HBase, Hive, and Cloudera. In this role, you will concentrate on creating, testing, implementing, and monitoring applications designed to meet the organization's strategic goals.

What You'll Do:
- Develop (code) for Hadoop, Spark, Java, and AngularJS
- Collaborate with like-minded team members to establish best practices and identify optimal technical solutions (20%)
- Review code and provide feedback relative to best practices; improve performance
- Design, develop, and test a large-scale, custom distributed software system using the latest Java, Scala, and Big Data technologies
- Adhere to appropriate SDLC and Agile practices
- Contribute actively to the definition of the technological strategy (design, architecture, and interfaces) in order to effectively respond to our client's business needs
- Participate in technology watch and the definition of standards to ensure that our systems and data warehouses are efficient, resilient, and durable
- Provide guidance and coaching to associate software developers
- Use Informatica or similar products, with an understanding of heterogeneous data replication techniques
- Conduct performance tuning, improvement, balancing, usability, and automation

Expertise You'll Bring:
- Experience developing code on distributed databases using Spark, HDFS, and Hive
- 3+ years of experience in an Application Developer / Data Architect or equivalent role
- Strong knowledge of data and data models
- Good understanding of data consumption patterns by business users
- Solid understanding of business processes and structures
- Basic knowledge of the securities trading business and risk

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
•Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential at Persistent. See Beyond, Rise Above.
Posted 3 months ago
4 - 8 years
9 - 13 Lacs
Bengaluru
Work from Office
About Persistent: We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com

About The Position: We are looking for a Big Data Developer to carry out coding or programming of Hadoop applications and to develop software using Hadoop technologies like Spark, Scala, Python, HBase, Hive, and Cloudera. In this role, you will concentrate on creating, testing, implementing, and monitoring applications designed to meet the organization's strategic goals.

What You'll Do:
- Develop (code) for Hadoop, Spark, Java, and AngularJS
- Collaborate with like-minded team members to establish best practices and identify optimal technical solutions (20%)
- Review code and provide feedback relative to best practices; improve performance
- Design, develop, and test a large-scale, custom distributed software system using the latest Java, Scala, and Big Data technologies
- Adhere to appropriate SDLC and Agile practices
- Contribute actively to the definition of the technological strategy (design, architecture, and interfaces) in order to effectively respond to our client's business needs
- Participate in technology watch and the definition of standards to ensure that our systems and data warehouses are efficient, resilient, and durable
- Provide guidance and coaching to associate software developers
- Use Informatica or similar products, with an understanding of heterogeneous data replication techniques
- Conduct performance tuning, improvement, balancing, usability, and automation

Expertise You'll Bring:
- Experience developing code on distributed databases using Spark, HDFS, and Hive
- 3+ years of experience in an Application Developer / Data Architect or equivalent role
- Strong knowledge of data and data models
- Good understanding of data consumption patterns by business users
- Solid understanding of business processes and structures
- Basic knowledge of the securities trading business and risk

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
•Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential at Persistent. See Beyond, Rise Above.
Posted 3 months ago
4 - 8 years
9 - 13 Lacs
Pune
Work from Office
About Persistent: We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com

About The Position: We are looking for a Big Data Developer to carry out coding or programming of Hadoop applications and to develop software using Hadoop technologies like Spark, Scala, Python, HBase, Hive, and Cloudera. In this role, you will concentrate on creating, testing, implementing, and monitoring applications designed to meet the organization's strategic goals.

What You'll Do:
- Develop (code) for Hadoop, Spark, Java, and AngularJS
- Collaborate with like-minded team members to establish best practices and identify optimal technical solutions (20%)
- Review code and provide feedback relative to best practices; improve performance
- Design, develop, and test a large-scale, custom distributed software system using the latest Java, Scala, and Big Data technologies
- Adhere to appropriate SDLC and Agile practices
- Contribute actively to the definition of the technological strategy (design, architecture, and interfaces) in order to effectively respond to our client's business needs
- Participate in technology watch and the definition of standards to ensure that our systems and data warehouses are efficient, resilient, and durable
- Provide guidance and coaching to associate software developers
- Use Informatica or similar products, with an understanding of heterogeneous data replication techniques
- Conduct performance tuning, improvement, balancing, usability, and automation

Expertise You'll Bring:
- Experience developing code on distributed databases using Spark, HDFS, and Hive
- 3+ years of experience in an Application Developer / Data Architect or equivalent role
- Strong knowledge of data and data models
- Good understanding of data consumption patterns by business users
- Solid understanding of business processes and structures
- Basic knowledge of the securities trading business and risk

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
•Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential at Persistent. See Beyond, Rise Above.
Posted 3 months ago
6 - 10 years
13 - 17 Lacs
Pune
Work from Office
About Persistent: We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com

About The Position: We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs in order to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do:
- Manage the customer's priorities across projects and requests
- Assess customer needs utilizing a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs and advise on options, risks, and cost
- Design and implement software products (Big Data related), including data models and visualizations
- Demonstrate participation with the teams you work in
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches, and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Show a solid level of understanding across a range of technical skills, attitudes, and behaviors
- Deliver great solutions
- Be focused on driving value back into the business

Expertise You'll Bring:
- 6 years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Additional experience working with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks (HBase, Talend, NoSQL databases)
- Apache Spark or other streaming Big Data processing preferred; Java or other Big Data technologies will be a plus

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
•Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
6 - 10 years
13 - 17 Lacs
Bengaluru
Work from Office
About Persistent: We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com

About The Position: We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs in order to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do:
- Manage the customer's priorities across projects and requests
- Assess customer needs utilizing a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs and advise on options, risks, and cost
- Design and implement software products (Big Data related), including data models and visualizations
- Demonstrate participation with the teams you work in
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches, and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Show a solid level of understanding across a range of technical skills, attitudes, and behaviors
- Deliver great solutions
- Be focused on driving value back into the business

Expertise You'll Bring:
- 6 years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Additional experience working with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks (HBase, Talend, NoSQL databases)
- Apache Spark or other streaming Big Data processing preferred; Java or other Big Data technologies will be a plus

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
•Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above.
Posted 3 months ago
6 - 10 years
13 - 17 Lacs
Hyderabad
Work from Office
About Persistent: We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we’ve onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com

About The Position: We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs in order to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do:
- Manage the customer's priorities across projects and requests
- Assess customer needs utilizing a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs and advise on options, risks, and cost
- Design and implement software products (Big Data related), including data models and visualizations
- Demonstrate participation with the teams you work in
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches, and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Show a solid level of understanding across a range of technical skills, attitudes, and behaviors
- Deliver great solutions
- Be focused on driving value back into the business

Expertise You'll Bring:
- 6 years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Additional experience working with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks (HBase, Talend, NoSQL databases)
- Apache Spark or other streaming Big Data processing preferred; Java or other Big Data technologies will be a plus

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
•Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. Let's unleash your full potential. See Beyond, Rise Above
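To illustrate the Hive and Spark skills the posting above calls for, here is a minimal, hypothetical PySpark sketch; the sales.orders table and its columns are invented for the example and are not part of the posting.

# A minimal PySpark sketch, assuming a Hive-enabled Spark setup and a
# hypothetical sales.orders table; illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive-aggregation-example")
    .enableHiveSupport()   # read managed Hive tables via the metastore
    .getOrCreate()
)

# Aggregate daily order totals from the (hypothetical) Hive table.
daily_totals = (
    spark.table("sales.orders")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"))
    .orderBy("order_date")
)

daily_totals.show(truncate=False)
spark.stop()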
Posted 3 months ago
5 - 10 years
15 - 25 Lacs
Bengaluru
Hybrid
Required skills:
- Relevant experience with Scala-Spark Big Data development
- Strong database experience, preferably with Hadoop, DB2, or Sybase
- Good understanding of the Hadoop (HDFS) ecosystem
- Complete SDLC process and Agile methodology (Scrum)
- Strong oral and written communication skills
- Experience working within a Scrum team
- Excellent interpersonal skills and a professional approach
- The ability to investigate and solve technical problems in the context of supporting production applications
- Hands-on data mining and analytical work experience with big data or Scala on Spark
- Unix OS, scripting, Python
- Good understanding of DevOps concepts, including working experience with CI/CD tools such as Jenkins
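As a rough illustration of the Spark data-mining work listed above, here is a minimal sketch. It is shown in PySpark for brevity (the Scala DataFrame API is nearly identical), and the HDFS path and column names are hypothetical.

# A minimal sketch of HDFS-backed Spark analysis; path and columns invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hdfs-mining-example").getOrCreate()

# Read raw events from HDFS and keep only well-formed rows.
events = (
    spark.read.option("header", "true").csv("hdfs:///data/raw/events")
    .where(F.col("user_id").isNotNull())
)

# Simple mining step: top 10 event types by frequency.
events.groupBy("event_type").count().orderBy(F.desc("count")).show(10)

spark.stop()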
Posted 3 months ago
8 - 10 years
12 - 16 Lacs
Ahmedabad
Work from Office
System Monitoring and Incident Response: SREs are responsible for implementing monitoring solutions to track system health, performance, and availability. They proactively monitor systems, identify issues, and respond to incidents promptly, working to minimize downtime and mitigate impact.
Post-Incident Analysis: Lead incident response efforts, coordinate with cross-functional teams, and conduct post-incident analysis to identify root causes and implement preventive measures.
Continuous Improvement and Reliability Engineering: SREs drive continuous improvement by identifying areas for enhancement, implementing best practices, and fostering a culture of reliability engineering. They participate in post-mortems, conduct blameless retrospectives, and drive initiatives to improve system reliability, stability, and maintainability.
Collaboration and Knowledge Sharing: SREs collaborate closely with software engineers, operations teams, and other stakeholders to ensure smooth coordination and effective communication. They share knowledge, provide technical guidance, and contribute to the development of a strong engineering culture.
Support and maintain configuration management for various applications and systems
Implement comprehensive service monitoring, including dashboards, metrics, and alerts
Define, measure, and meet key service level objectives, such as uptime, performance, incidents, and chronic problems
Partner with application and business stakeholders to ensure high-quality product development and releases
Collaborate with the development team to enhance system reliability and performance
Requirements:
Bachelor's degree in Information Technology, Computer Science, or a related field
Strong knowledge of software development processes and procedures
Strong problem-solving abilities
Excellent understanding of computer systems, servers, and network systems
Ability to work under pressure and manage multiple tasks simultaneously
Strong communication and interpersonal skills
Ability to program (structured and OOP) in one or more high-level languages, such as Python, Java, Go, C/C++, Ruby, or JavaScript
Experience with distributed storage technologies such as NFS, HDFS, Ceph, and Amazon S3, as well as dynamic resource management frameworks (Apache Mesos, Kubernetes, YARN)
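As a rough sketch of the monitoring-and-alerting work described above, the following hypothetical Python health check compares response latency against an SLO. A real deployment would use tooling such as Prometheus dashboards and a paging service; the endpoint and threshold here are invented.

# A minimal service health check; endpoint and SLO are hypothetical.
import time
import urllib.request

LATENCY_SLO_SECONDS = 0.5   # hypothetical service level objective

def alert(message: str) -> None:
    # Placeholder: a real SRE setup would page via PagerDuty, Opsgenie, etc.
    print(f"[ALERT] {message}")

def check_service(url: str) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            latency = time.monotonic() - start
            if resp.status != 200:
                alert(f"{url} returned HTTP {resp.status}")
            elif latency > LATENCY_SLO_SECONDS:
                alert(f"{url} latency {latency:.2f}s breached the {LATENCY_SLO_SECONDS}s SLO")
    except OSError as exc:  # covers connection errors and timeouts
        alert(f"{url} is unreachable: {exc}")

if __name__ == "__main__":
    check_service("http://localhost:8080/healthz")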
Posted 3 months ago
1 - 6 years
8 - 16 Lacs
Bengaluru
Work from Office
Openings for Development Engineer with a product-based company in Bangalore.
Total exp: 1-6 years
Notice period: At the earliest
Job location: Bangalore
Mode of employment: Permanent, work from office
Role & responsibilities:
Must have worked with Core Java, Hibernate, and multi-threading for back-end development.
Must have sound knowledge of data structures and algorithms.
Knowledge of or experience in developing products from scratch is an added advantage.
Should have an analytical bent of mind and good problem-solving capabilities.
Knowledge of Big Data technologies like Hadoop and HDFS would be advantageous.
Preferably from a product development background.
If you are interested, please reach out to deeksha.bharadwaj@pelatro.com with the details below.
Required details:
Total exp:
Relevant exp:
Notice period duration:
Current CTC:
Expected CTC:
Reason for change of company:
Current company:
Current location:
Regards,
deeksha.bharadwaj@pelatro.com
Posted 3 months ago
4 - 8 years
5 - 15 Lacs
Mumbai
Work from Office
Technology Analyst: Development, Analysis, Modelling, Support
Mandatory skills:
HDFS, Ozone, Hive, Impala, Spark, Atlas, Ranger
Kafka, Flink, Spark Streaming
Java, Python/PySpark
Experience with CI/CD (GitLab/GitHub, Jenkins, Ansible, Nexus) for automated build and test
Excellent communication
Bachelor's degree in Computer Science, Software Engineering, or a related field, or equivalent
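To illustrate the Kafka and Spark Streaming stack listed above, here is a minimal Spark Structured Streaming sketch in PySpark; the broker address and topic name are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath.

# A minimal Kafka -> Spark Structured Streaming sketch; broker and topic invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-streaming-example").getOrCreate()

# Consume a Kafka topic as a streaming DataFrame.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string.
messages = raw.select(F.col("value").cast("string").alias("payload"))

# Write to the console for demonstration; production jobs would target
# HDFS, Hive, or another durable sink with checkpointing configured.
query = (
    messages.writeStream.format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()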
Posted 3 months ago
4 - 8 years
5 - 15 Lacs
Bengaluru
Work from Office
Technology Analyst: Development, Analysis, Modelling, Support
Mandatory skills:
HDFS, Ozone, Hive, Impala, Spark, Atlas, Ranger
Kafka, Flink, Spark Streaming
Java, Python/PySpark
Experience with CI/CD (GitLab/GitHub, Jenkins, Ansible, Nexus) for automated build and test
Excellent communication
Bachelor's degree in Computer Science, Software Engineering, or a related field, or equivalent
Posted 3 months ago
1 - 2 years
4 - 7 Lacs
Pune
Work from Office
Design and optimize distributed data pipelines using Java and Apache Spark/Flink
Build and manage scalable data lake solutions (AWS S3, HDFS, etc.)
Implement cloud-based data processing solutions on AWS, Azure, or GCP
Collaborate with teams to integrate and improve data workflows
What We're Looking For:
5+ years of experience in Java development with expertise in distributed systems
Strong hands-on experience with Apache Spark or Apache Flink
Experience working with data lake technologies (e.g., AWS S3, HDFS)
Familiarity with cloud platforms (AWS, Azure, GCP) and data formats (Parquet, Avro)
Strong knowledge of NoSQL databases and CI/CD practices
Nice-to-Have:
Experience with Docker, Kubernetes, and Apache Kafka
Knowledge of data governance and security best practices
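As an illustration of the data lake work described above, here is a minimal, hypothetical PySpark sketch that lands raw JSON as partitioned Parquet on S3. The bucket, paths, and event_date column are invented, and the s3a:// scheme assumes the hadoop-aws connector is configured.

# A minimal data-lake write sketch; bucket, paths, and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("datalake-parquet-example").getOrCreate()

# Read raw JSON events, then persist them to the lake as partitioned Parquet.
events = spark.read.json("s3a://example-bucket/raw/events/")

(
    events.write.mode("append")
    .partitionBy("event_date")        # assumes an event_date column exists
    .parquet("s3a://example-bucket/curated/events/")
)

spark.stop()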
Posted 3 months ago
3 - 6 years
5 - 8 Lacs
Pune
Work from Office
Role Description
The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. The role may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
Planning and developing entire engineering solutions to accomplish business goals
Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle
Ensuring maintainability and reusability of engineering solutions
Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow
Reviewing engineering plans and quality to drive reuse and improve engineering capability
Participating in industry forums to drive adoption of innovative technologies, tools, and solutions in the bank
Your key responsibilities
You are responsible for implementing the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain
You support the migration of current functionalities to Google Cloud
You are responsible for the stability of the application landscape and support software releases
You also support L3 topics and application governance
You are responsible for coding in the CTM area as part of an agile team (Java, Scala, Spring Boot)
Your skills and experience
You have experience with databases (HDFS, BigQuery, etc.) and development, preferably with Big Data and GCP technologies
Strong understanding of the Data Mesh approach and integration patterns
Understanding of party data and its integration with product data
Your architectural skills for big data solutions, especially interface architecture, allow a fast start
You have experience with at least Spark, Java and Scala, Maven, Artifactory, the Hadoop ecosystem, GitHub and GitHub Actions, and Terraform scripting
You have knowledge of customer reference data and customer account opening processes, and preferably of regulatory topics around know-your-customer processes
You work well in teams as well as independently, and you are constructive and target-oriented
You have good English skills and can communicate both professionally and informally with the team
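To illustrate the BigQuery development this role involves, here is a minimal sketch using the official google-cloud-bigquery Python client; the project, dataset, and table names are hypothetical.

# A minimal BigQuery query sketch; project and table names are invented.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT customer_id, COUNT(*) AS open_accounts
    FROM `example-project.reference_data.accounts`
    WHERE status = 'OPEN'
    GROUP BY customer_id
    ORDER BY open_accounts DESC
    LIMIT 10
"""

# client.query() submits the job; iterating the result waits for completion.
for row in client.query(query).result():
    print(row.customer_id, row.open_accounts)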
Posted 3 months ago