3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You are an experienced StreamSets and Snowflake Developer who will be responsible for designing and implementing robust, scalable data pipelines and efficient data management solutions. Your proficiency in StreamSets, Snowflake, Python, and SQL will be key in building modern data platforms and automation using CI/CD tools.

**Key Responsibilities:**

- **Technical Delivery:**
  - Design and implement scalable data pipelines in StreamSets to migrate data from SQL Server to Snowflake.
  - Develop and manage StreamSets pipelines using the StreamSets Python SDK.
  - Build stored procedures in Snowflake using SQL, Python, or JavaScript.
  - Create and manage Snowflake virtual warehouses, tasks, streams, and Snowpipe.
  - Upgrade and manage StreamSets Data Collector engines using Docker.
  - Deploy and manage credentials in Azure Key Vault, integrating with StreamSets.
- **Optimization & Security:**
  - Apply performance tuning and optimization techniques for Snowflake warehouses.
  - Implement data security via RBAC, masking policies, and row-level access policies in Snowflake (see the sketch below).
- **Development Practices:**
  - Write clean, maintainable, and secure code in Python and SQL.
  - Conduct unit testing and manage the defect lifecycle (raise, fix, retest).
  - Estimate time, effort, and resources for assigned tasks.
- **Collaboration & Mentorship:**
  - Mentor junior developers and provide feedback via FAST goal-setting.
  - Actively engage with customers to understand business problems and propose effective solutions.
- **CI/CD & Automation:**
  - Automate deployment and integration using GitHub Actions and Azure DevOps pipelines.
  - Identify opportunities for process improvement and automation in agile delivery models.

**Outputs Expected:**
- Clean, tested code with proper documentation
- Updated knowledge repositories (e.g., SharePoint, wikis)
- Accurate task estimates and timely delivery
- Regular status updates in line with project standards
- Compliance with release and configuration management processes

**Skills Required:**

**Must-Have Technical Skills:**
- StreamSets: strong knowledge of StreamSets architecture and pipeline design
- Snowflake: experience with Snowflake SQL, Snowpark, Snowpipe, RBAC, masking and row access policies
- Python: developing data pipelines and stored procedures
- SQL: strong experience writing complex SQL queries
- ETL/ELT: deep understanding of data integration concepts
- Docker: managing containers and deploying StreamSets Data Collector
- CI/CD: GitHub Actions and Azure DevOps pipeline experience

**Additional Technical Knowledge:**
- Experience with stored procedures in SQL, Python, and JavaScript
- Working knowledge of Agile methodologies
- Familiarity with data security best practices
- Operating systems, IDEs, and database platforms
- Knowledge of the customer domain and sub-domain

**Soft Skills:**
- Excellent verbal and written communication
- Team collaboration and mentoring capabilities
- Proactive problem-solving
- Customer-centric mindset

**Performance Metrics:**
- Adherence to coding standards, SLAs, and timelines
- Minimal post-delivery defects
- Timely bug resolution
- Completion of training and certifications
- Knowledge contribution and documentation quality
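For illustration only, here is a minimal sketch of the kind of masking and row access policies this posting describes, issued through the snowflake-connector-python package. The connection parameters and all object, role, and column names (ANALYTICS_DB, PII_MASK, REGION_FILTER, CUSTOMERS) are hypothetical placeholders, not part of the posting:

```python
# Minimal sketch: applying a masking policy and a row access policy in Snowflake
# via snowflake-connector-python. All names below are hypothetical placeholders;
# in practice credentials would come from a vault or environment variables.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="COMPUTE_WH",
    database="ANALYTICS_DB",
    schema="PUBLIC",
)

statements = [
    # Mask email addresses for everyone except a privileged role.
    """
    CREATE OR REPLACE MASKING POLICY pii_mask AS (val STRING)
    RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val ELSE '***MASKED***' END
    """,
    "ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY pii_mask",
    # Restrict rows by region based on the caller's role.
    """
    CREATE OR REPLACE ROW ACCESS POLICY region_filter AS (region STRING)
    RETURNS BOOLEAN ->
      CURRENT_ROLE() = 'GLOBAL_ANALYST' OR region = 'EMEA'
    """,
    "ALTER TABLE customers ADD ROW ACCESS POLICY region_filter ON (region)",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```

In a real deployment the privileged role names and masking rules would come from the organization's RBAC design rather than being hard-coded.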
Posted 1 day ago
3.0 - 7.0 years
5 - 13 Lacs
Hyderabad, Pune, Chennai
Work from Office
years of hands-on experience with IBM StreamSets (Data Collector & Transformer). - Strong understanding of pipeline performance tuning (e.g., stage optimization, buffer tuning, cluster resource allocation).
Posted 1 day ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process. Your role will be pivotal in ensuring that applications meet the required standards and deliver value to the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have skills: proficiency in Snowflake Data Warehouse.
- Strong understanding of data warehousing concepts and architecture.
- Experience with ETL processes and data integration techniques.
- Familiarity with SQL, Python, StreamSets, and data querying languages.
- Ability to analyze and optimize database performance (see the sketch below).

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
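As a concrete illustration of the "analyze and optimize database performance" requirement, a minimal sketch that surfaces the slowest recent Snowflake queries from the standard SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view; the connection parameters are hypothetical placeholders:

```python
# Minimal sketch: finding slow Snowflake queries via ACCOUNT_USAGE.QUERY_HISTORY.
# QUERY_HISTORY is a standard view in the SNOWFLAKE.ACCOUNT_USAGE schema
# (note it typically lags real time by up to ~45 minutes).
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="COMPUTE_WH",
)

SLOW_QUERIES = """
SELECT query_id,
       warehouse_name,
       total_elapsed_time / 1000 AS elapsed_s,
       bytes_spilled_to_local_storage,
       query_text
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
ORDER BY total_elapsed_time DESC
LIMIT 20
"""

cur = conn.cursor()
for row in cur.execute(SLOW_QUERIES):
    # Queries spilling to local storage are prime candidates for warehouse
    # resizing or query rewrites.
    print(row)
conn.close()
```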
Posted 6 days ago
5.0 - 8.0 years
5 - 9 Lacs
Pune
Work from Office
Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client end, ensuring they meet 100% of quality assurance parameters.

Do
1. Be instrumental in understanding the requirements and design of the product/software:
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas throughout the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and availability
- Analyze client requirements and convert them into feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, and proposed software
- Develop and automate processes for software validation by setting up, designing, and executing test cases/scenarios/usage cases
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications to existing ones
- Ensure code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all defects are raised per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to the concerned stakeholders

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback regularly to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Formally document all necessary details and reports for a proper understanding of the software, from client proposal to implementation
- Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally

Mandatory Skills: StreamSets. Experience: 5-8 Years.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
Wipro Limited is a leading technology services and consulting company dedicated to developing innovative solutions that address clients' most intricate digital transformation requirements. With a vast global presence, Wipro aims to help customers, colleagues, and communities thrive in a rapidly evolving world.

As a Consultant at Wipro, you will play a pivotal role in consulting projects and will be expected to complete specific tasks independently while developing expertise in core areas. Your responsibilities will include analyzing, designing, and developing components using Databricks and StreamSets, collaborating with teams to build data models, developing solutions to publish to and subscribe from Kafka topics (sketched below), and participating in application deployments and reviews.

You will be required to possess strong skills in Kafka, StreamSets, Databricks, and SQL, along with experience in .NET technologies, Microsoft Visual Studio, APIs, and Oracle/PL-SQL. The role also demands the ability to interpret complex business requirements and translate them into application logic effectively. The ideal candidate should have 3-5 years of experience, with mandatory skills in StreamSets, and a willingness to embrace reinvention and continuous evolution in a dynamic work environment.

Join Wipro and embark on a journey of reinventing yourself, your career, and your skills in a purpose-driven organization that values diversity and encourages personal growth.
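The publish/subscribe responsibility above can be pictured with a minimal kafka-python sketch; the broker address and topic name are hypothetical placeholders:

```python
# Minimal sketch of publishing to and consuming from a Kafka topic with the
# kafka-python package. Broker and topic are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # assumption: a locally reachable broker
TOPIC = "orders"            # hypothetical topic

# Publish a JSON-encoded event.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 42, "status": "CREATED"})
producer.flush()

# Subscribe and print events as they arrive.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,   # stop iterating when idle, keeps the sketch finite
)
for message in consumer:
    print(message.value)
```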
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Data Engineer - Azure at NTT DATA in Chennai, Tamil Nadu (IN-TN), India (IN), you will be responsible for designing and implementing tailored data solutions to meet customer needs across various use cases, from streaming to data lakes, analytics, and more, in a constantly evolving technical environment. Your key responsibilities will include providing thought leadership by recommending the most suitable technologies and solutions for different use cases while ensuring performance, security, scalability, and robust data integrations. You will be proficient in coding skills using languages such as Python, Java, and Scala to efficiently move solutions into production. Collaboration across diverse technical stacks, including Azure and Databricks, will be essential. You will also develop and deliver detailed presentations to effectively communicate complex technical concepts and generate comprehensive solution documentation. Adhering to Agile practices throughout the solution development process, you will design, build, and deploy databases and data stores to meet organizational requirements.

The ideal candidate will have a minimum of 5 years of experience in supporting Software Engineering, Data Engineering, or Data Analytics projects and be willing to travel at least 25% of the time. Preferred skills include demonstrated production experience in core data platforms like Azure and Databricks, hands-on knowledge of Cloud and Distributed Data Storage, expertise in Azure data services, ADLS, and ADF, and a strong understanding of data integration technologies including Spark, Kafka, and eventing/streaming (see the sketch below), among others. Effective communication skills, both written and verbal, are crucial for conveying complex technical concepts. An undergraduate or graduate degree is preferred for this role.

Join NTT DATA, a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, we are committed to helping clients innovate, optimize, and transform for long-term success. With experts in over 50 countries and a robust partner ecosystem, our services range from business and technology consulting to data and artificial intelligence solutions. Be a part of our mission to help organizations and society confidently transition into the digital future. Visit us at us.nttdata.com.
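As an illustration of the streaming-ingestion work described above, a minimal PySpark Structured Streaming sketch that reads a Kafka topic and writes Delta files to ADLS; the broker, topic, event schema, and storage paths are all hypothetical placeholders:

```python
# Minimal sketch: a near-real-time job on Spark/Databricks that reads a Kafka
# topic and writes to a Delta table on ADLS. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

event_schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "telemetry")
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "abfss://lake@account.dfs.core.windows.net/_chk/telemetry")
    .start("abfss://lake@account.dfs.core.windows.net/bronze/telemetry")
)
# Production code would block on query.awaitTermination().
```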
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer - Azure to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Job Duties:

Key Responsibilities:
- Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack.
- Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure.
- Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations.
- Collaborate seamlessly across diverse technical stacks, including Azure and Databricks.
- Develop and deliver detailed presentations to effectively communicate complex technical concepts.
- Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc.
- Adhere to Agile practices throughout the solution development process.
- Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications:
- 5+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects.
- Ability to travel at least 25%.

Preferred Skills:
- Demonstrated production experience in core data platforms such as Azure and Databricks.
- Hands-on knowledge of Cloud and Distributed Data Storage, including expertise in Azure data services, ADLS, ADF, Databricks, data quality, and ETL/ELT.
- Strong understanding of data integration technologies, encompassing Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc.
- Professional written and verbal communication skills to effectively convey complex technical concepts.
- Undergraduate or graduate degree preferred.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees.

NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You will be joining Birlasoft, a leading organization at the forefront of merging domain expertise, enterprise solutions, and digital technologies to redefine business outcomes. Emphasizing a consultative and design-thinking approach, we drive societal progress by empowering customers to operate businesses with unparalleled efficiency and innovation. As part of the esteemed multibillion-dollar CKA Birla Group, Birlasoft, comprising a dedicated team of over 12,500 professionals, is dedicated to upholding the Group's distinguished 162-year legacy. At our foundation, we prioritize Diversity, Equity, and Inclusion (DEI) practices, coupled with Corporate Social Responsibility (CSR) initiatives, demonstrating our dedication to constructing not only businesses but also inclusive and sustainable communities. Come join us in shaping a future where technology seamlessly aligns with purpose.

We are currently looking for a skilled and proactive StreamSets or Denodo Platform Administrator to manage and enhance our enterprise data engineering and analytics platforms. This position requires hands-on expertise in overseeing large-scale Snowflake data warehouses and StreamSets data pipelines, with a focus on robust troubleshooting, automation, and monitoring capabilities (see the sketch below). The ideal candidate will ensure platform reliability, performance, security, and compliance while collaborating closely with various teams such as data engineers, DevOps, and support teams. The role will be based in Pune, Hyderabad, Noida, or Bengaluru, and requires a minimum of 5 years of experience.

Key Requirements:
- Bachelor's or Master's degree in Computer Science, IT, or a related field (B.Tech./MCA preferred).
- Minimum of 3 years of hands-on experience in Snowflake administration.
- 5+ years of experience managing StreamSets pipelines in enterprise-grade environments.
- Strong familiarity with AWS services, particularly S3, IAM, Lambda, and EC2.
- Working knowledge of ServiceNow, Jira, Git, Grafana, and Denodo.
- Understanding of data modeling, ETL/ELT best practices, and modern data platform architectures.
- Experience with DataOps, DevSecOps, and cloud-native deployment principles is advantageous.
- Certification in Snowflake or AWS is highly desirable.

If you possess the required qualifications and are passionate about leveraging your expertise in platform administration to drive impactful business outcomes, we invite you to apply and be part of our dynamic team at Birlasoft.
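For a flavor of the AWS-facing administration work, a minimal boto3 sketch that checks S3 bucket encryption and inventories Lambda runtimes; the bucket name is hypothetical, and credentials are assumed to come from the environment or an IAM role:

```python
# Minimal sketch of routine platform checks with boto3 (AWS SDK for Python).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
lam = boto3.client("lambda")

BUCKET = "streamsets-staging-bucket"  # hypothetical pipeline staging bucket

# Confirm default encryption is configured on the staging bucket.
try:
    enc = s3.get_bucket_encryption(Bucket=BUCKET)
    rules = enc["ServerSideEncryptionConfiguration"]["Rules"]
    print(f"{BUCKET}: encryption rules {rules}")
except ClientError as err:
    # Buckets without default encryption raise a ClientError.
    print(f"{BUCKET}: no default encryption ({err.response['Error']['Code']})")

# Inventory Lambda functions, e.g. to spot deprecated runtimes.
paginator = lam.get_paginator("list_functions")
for page in paginator.paginate():
    for fn in page["Functions"]:
        print(fn["FunctionName"], fn.get("Runtime"))
```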
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
We are looking for a highly skilled and proactive Snowflake & StreamSets Platform Administrator to support and enhance our enterprise data engineering and analytics platforms. In this role, you will be responsible for managing large-scale Snowflake data warehouses and StreamSets data pipelines; your expertise should include strong troubleshooting, automation, and monitoring capabilities. Your primary focus will be on ensuring platform reliability, performance, security, and compliance, and collaboration with cross-functional teams such as data engineers, DevOps, and support teams will be essential. The ideal candidate should have at least 5 years of experience in this field and be comfortable working in a full-time role with 24x7 support (rotational shifts). The position can be based in Pune, Hyderabad, Noida, or Bengaluru. If you are a detail-oriented individual with a passion for data management and platform administration, and possess the necessary technical skills, this opportunity could be a perfect fit for you. Join our team at the Birlasoft office in Noida, India, and contribute to the success of our data engineering and analytics initiatives.
Posted 1 month ago
5.0 - 8.0 years
15 - 20 Lacs
Pune
Work from Office
Calling all innovators: find your future at Fiserv. We're Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day, quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Tech Lead, Data Architecture

What does a successful Snowflake Advisor do?
We are seeking a highly skilled and experienced Snowflake Advisor to take ownership of our data warehousing strategy, implementation, maintenance, and support. In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization.

What You Will Do:
- Define and implement best practices for data modelling, schema design, and query optimization in Snowflake
- Develop and manage ETL/ELT workflows to ingest, transform, and load data into Snowflake from various sources
- Integrate data from diverse systems like databases, APIs, flat files, cloud storage, etc. into Snowflake
- Use tools like StreamSets, Informatica, or dbt to streamline data transformation processes
- Monitor and tune Snowflake performance, including warehouse sizing, query optimization, and storage management (see the sketch below)
- Manage Snowflake caching, clustering, and partitioning to improve efficiency
- Analyze and resolve query performance bottlenecks
- Monitor and resolve data quality issues within the warehouse
- Collaborate with data analysts, data engineers, and business users to understand reporting and analytic needs
- Work closely with the DevOps team on automation, deployment, and monitoring
- Plan and execute strategies for scaling Snowflake environments as data volume grows
- Monitor system health and proactively identify and resolve issues
- Implement automations for regular tasks
- Enable seamless integration of Snowflake with BI tools like Power BI and create dashboards
- Support ad hoc query requests while maintaining system performance
- Create and maintain documentation related to data warehouse architecture, data flow, and processes
- Provide technical support, troubleshooting, and guidance to users accessing the data warehouse
- Optimize Snowflake queries and manage performance
- Keep up to date with emerging trends and technologies in data warehousing and data management
- Apply good working knowledge of the Linux operating system
- Use Git and other repository management solutions
- Apply good knowledge of monitoring tools like Dynatrace and Splunk
- Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines
- Provide mentorship and guidance to team members in Snowflake implementation, performance tuning, and data management
- Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps
- Act as the point of contact for Snowflake-related queries, issues, and initiatives

What You Will Need To Have:
- 8 to 10 years of experience with data management tools like Snowflake, StreamSets, and Informatica
- Experience with monitoring tools like Dynatrace and Splunk
- Experience with Kubernetes cluster management, CloudWatch for monitoring and logging, and the Linux OS
- Ability to track progress against assigned tasks, report status, and proactively identify issues
- Ability to present information effectively in communications with peers and the project management team
- Highly organized; works well in a fast-paced, fluid, and dynamic environment

What Would Be Great To Have:
- Experience with EKS for managing Kubernetes clusters
- Containerization technologies such as Docker and Podman
- AWS CLI for command-line interactions
- CI/CD pipelines using Harness
- S3 for storage solutions and IAM for access management
- Banking and Financial Services experience
- Knowledge of software development life cycle best practices

Thank you for considering employment with Fiserv. Please apply using your legal name, and complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our Commitment To Diversity And Inclusion:
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies:
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts:
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
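To make the warehouse-tuning responsibilities concrete, a minimal sketch of typical sizing, auto-suspend, and clustering statements issued via snowflake-connector-python; the warehouse and table names are hypothetical placeholders:

```python
# Minimal sketch of routine Snowflake warehouse tuning. Names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
)
cur = conn.cursor()

# Right-size a warehouse and let it suspend aggressively to control credits.
cur.execute("ALTER WAREHOUSE reporting_wh SET WAREHOUSE_SIZE = 'MEDIUM'")
cur.execute("ALTER WAREHOUSE reporting_wh SET AUTO_SUSPEND = 60 AUTO_RESUME = TRUE")

# Add a clustering key so range scans on event_date prune micro-partitions.
cur.execute("ALTER TABLE sales.public.fact_orders CLUSTER BY (event_date)")

# Check how well the table is clustered on that key.
cur.execute(
    "SELECT SYSTEM$CLUSTERING_INFORMATION('sales.public.fact_orders', '(event_date)')"
)
print(cur.fetchone()[0])

conn.close()
```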
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled Quality Engineer - Data to ensure the reliability, accuracy, and performance of data pipelines and AI/ML models within our SmartFM platform. This role is critical to delivering trusted data and actionable insights that drive smart building optimization and operational efficiency.

Key Responsibilities:
- Design and implement robust QA strategies for data pipelines, ML models, and agentic workflows.
- Test and validate data ingestion and streaming systems (e.g., StreamSets, Kafka) for accuracy, completeness, and resilience.
- Ensure data integrity and schema validation within MongoDB and other data stores.
- Collaborate with Data Engineers to proactively identify and resolve data quality issues.
- Partner with Data Scientists to validate ML/DL/LLM model performance, fairness, and robustness.
- Automate testing processes using frameworks such as Pytest, Great Expectations, and Deepchecks (see the sketch below).
- Monitor production pipelines for anomalies, data drift, and model degradation.
- Participate in code reviews and QA audits, and maintain comprehensive documentation of test plans and results.
- Continuously evaluate and improve QA processes based on industry best practices and emerging trends.

Required Technical Skills:
- 5-10 years of QA experience with a focus on data validation and ML model testing.
- Strong command of SQL for complex data queries and integrity checks.
- Practical experience with StreamSets, Kafka, and MongoDB.
- Proficiency in Python scripting for automation and testing.
- Familiarity with ML testing metrics, model validation techniques, and bias detection.
- Exposure to cloud platforms such as Azure, AWS, or GCP.
- Working knowledge of QA tools like Pytest, Great Expectations, and Deepchecks.
- Understanding of Node.js and React-based applications is an added advantage.

Additional Qualifications:
- Excellent communication, documentation, and cross-functional collaboration skills.
- Strong analytical mindset and high attention to detail.
- Ability to work with cross-disciplinary teams including Engineering, Data Science, and Product.
- Passion for continuous learning and adoption of new QA tools and methodologies.
- Domain knowledge in facility management, IoT, or building automation systems is a strong plus.
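A minimal sketch of the kind of automated data-quality tests the role describes, using pytest and pandas; the file path and column names are hypothetical stand-ins for the pipeline output under test:

```python
# Minimal sketch of automated data-quality tests with pytest and pandas.
# The parquet path and columns are placeholders; in practice the frame would
# come from the pipeline under test (e.g., a MongoDB or Kafka extract).
import pandas as pd
import pytest

@pytest.fixture
def readings() -> pd.DataFrame:
    return pd.read_parquet("/data/smartfm/readings.parquet")

def test_no_null_keys(readings):
    # Every record must carry a device identifier and a timestamp.
    assert readings["device_id"].notna().all()
    assert readings["event_time"].notna().all()

def test_value_ranges(readings):
    # Sensor readings outside physical bounds indicate ingestion drift.
    assert readings["temperature_c"].between(-40, 85).all()

def test_no_duplicate_events(readings):
    assert not readings.duplicated(subset=["device_id", "event_time"]).any()
```

The same checks could be expressed declaratively in Great Expectations or Deepchecks; pytest is shown here because it slots directly into a CI pipeline.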
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Tech Lead, Data Architecture at Fiserv, you will play a crucial role in our data warehousing strategy and implementation. Your responsibilities will include designing, developing, and leading the adoption of Snowflake-based solutions to ensure efficient and secure data systems that drive our business analytics and decision-making processes.

Collaborating with cross-functional teams, you will define and implement best practices for data modeling, schema design, and query optimization in Snowflake. Additionally, you will develop and manage ETL/ELT workflows to ingest, transform, and load data from various sources into Snowflake, integrating data from diverse systems like databases, APIs, flat files, and cloud storage. Monitoring and tuning Snowflake performance, you will manage caching, clustering, and partitioning to enhance efficiency while analyzing and resolving query performance bottlenecks. You will work closely with data analysts, data engineers, and business users to understand reporting and analytic needs, ensuring seamless integration with BI tools like Power BI. Your role will also involve collaborating with the DevOps team on automation, deployment, and monitoring, as well as planning and executing strategies for scaling Snowflake environments as data volume grows. Keeping up to date with emerging trends and technologies in data warehousing and data management is essential, along with providing technical support, troubleshooting, and guidance to users accessing the data warehouse.

To be successful in this role, you must have 8 to 10 years of experience with data management tools like Snowflake, StreamSets, and Informatica. Experience with monitoring tools like Dynatrace and Splunk, Kubernetes cluster management, and the Linux OS is required. Additionally, familiarity with containerization technologies, cloud services, CI/CD pipelines, and banking or financial services experience would be advantageous.

Thank you for considering employment with Fiserv. To apply, please use your legal name, complete the step-by-step profile, and attach your resume. Fiserv is committed to diversity and inclusion and does not accept resume submissions from agencies outside of existing agreements. To protect your personal information and financial security, please be aware of fraudulent job postings not affiliated with Fiserv.
Posted 2 months ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Data Engineering
Good-to-have skills: Cloud Infrastructure
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: We are seeking a StreamSets SME with deep expertise in IBM StreamSets and working knowledge of cloud infrastructure to support ongoing client engagements. The ideal candidate should have a strong data engineering background with hands-on experience in metadata handling, pipeline management, and discovery dashboard interpretation. As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that project goals are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with organizational objectives, ensuring that all stakeholders are informed and involved in key decisions throughout the project lifecycle.

Key Responsibilities:
- Lead and manage the configuration, monitoring, and optimization of StreamSets pipelines.
- Work closely with client teams and internal stakeholders to interpret metadata extracts and validate pipeline inputs from StreamSets and Pulse Discovery Dashboards.
- Evaluate metadata samples provided by the client and identify gaps or additional requirements for full extraction.
- Coordinate with SMEs and client contacts to validate technical inputs and assessments.
- Support the preparation, analysis, and optimization of full metadata extracts for ongoing project phases.
- Collaborate with cloud infrastructure teams to ensure seamless deployment and monitoring of StreamSets on cloud platforms.
- Provide SME-level inputs and guidance during design sessions, catch-ups, and technical reviews.
- Ensure timely support during critical assessments and project milestones.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge sharing and mentoring within the team to enhance overall performance.
- Monitor project progress and implement necessary adjustments to meet deadlines and quality standards.

Preferred Qualifications:
- Proven experience in StreamSets (IBM preferred) pipeline development and administration.
- Familiarity with the Discovery Dashboard and metadata structures.
- Exposure to cloud platforms such as AWS, including infrastructure setup and integration with data pipelines.
- Strong communication skills for client interaction and stakeholder engagement.
- Ability to work independently in a fast-paced, client-facing environment.

Professional & Technical Skills:
- Must-have skills: SME-level proficiency in IBM StreamSets.
- Good-to-have skills: experience with Cloud Infrastructure; proficiency in Data Engineering.
- Strong understanding of data modeling and ETL processes.
- Experience with big data technologies such as Hadoop or Spark.
- Familiarity with database management systems, including SQL and NoSQL databases.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 2 months ago
7.0 - 12.0 years
11 - 15 Lacs
Gurugram
Work from Office
Project description
We are looking for an experienced Data Engineer to contribute to the design, development, and maintenance of our database systems. This role will work closely with our software development and IT teams to ensure the effective implementation and management of database solutions that align with the client's business objectives.

Responsibilities
The successful candidate will be responsible for managing technology in projects and providing technical guidance/solutions for work completion:
1. Provide technical guidance/solutions.
2. Ensure process compliance in the assigned module and participate in technical discussions/reviews.
3. Prepare and submit status reports to minimize exposure and risks on the project, or to close escalations.
4. Be self-organized and focused on developing quality software on time.

Skills
Must have:
- At least 7 years of experience in development on data-specific projects.
- Working knowledge of streaming data and the Kafka framework (KSQL/MirrorMaker, etc.).
- Strong programming skills in at least one of these programming languages: Groovy/Java.
- Good knowledge of data structures, ETL design, and storage.
- Experience working in streaming data environments and pipelines.
- Experience in near-real-time/streaming data pipeline development using Apache Spark/StreamSets/Apache NiFi or similar frameworks.

Nice to have: N/A
Posted 2 months ago
3.0 - 7.0 years
5 - 9 Lacs
Pune, Bengaluru
Work from Office
Job Title: StreamSets ETL Developer, Associate
Location: Pune, India

Role Description
Currently DWS sources technology infrastructure, corporate functions systems [Finance, Risk, HR, Legal, Compliance, AFC, Audit, Corporate Services etc.] and other key services from DB. Project Proteus aims to strategically transform DWS to an Asset Management standalone operating platform; an ambitious and ground-breaking project that delivers separated DWS infrastructure and Corporate Functions in the cloud with essential new capabilities, further enhancing DWS' highly competitive and agile Asset Management capability. This role offers a unique opportunity to be part of a high-performing team implementing a strategic future-state technology landscape for all DWS Corporate Functions globally. We are seeking a highly skilled and motivated ETL developer (individual contributor) to join our integration team. The ETL developer will be responsible for developing, testing, and maintaining robust and scalable ETL processes to support our data integration initiatives. This role requires a strong understanding of database, Unix, and ETL concepts, excellent SQL skills, and experience with ETL tools and databases.

Your key responsibilities
- Create good-quality software using standard coding practices, with hands-on code development.
- Thoroughly test developed ETL solutions/pipelines (an incremental-load pattern is sketched below).
- Review the code of other team members.
- Take E2E accountability and ownership of work/projects, applying robust engineering practices.
- Convert business requirements into technical design.
- Handle delivery, deployment, review, business interaction, and environment maintenance.

Additionally, the role will include other responsibilities, such as:
- Collaborating across teams.
- Sharing information and transferring knowledge and expertise to team members.
- Working closely with stakeholders and other teams, such as Functional Analysis and Quality Assurance.
- Working with BA and QA to troubleshoot and resolve reported bugs/issues in applications.

Your skills and experience
- Bachelor's degree from an accredited college or university with a concentration in Science or an IT-related discipline (or equivalent).
- Hands-on experience with StreamSets, SQL Server, and Unix.
- Experience developing and optimizing ETL pipelines for data ingestion, manipulation, and integration.
- Strong proficiency in SQL, including complex queries, stored procedures, and functions.
- Solid understanding of relational database concepts.
- Familiarity with data modeling concepts (conceptual, logical, physical).
- Familiarity with HDFS, Kafka, microservices, and Splunk.
- Familiarity with cloud-based platforms (e.g., GCP, AWS).
- Experience with scripting languages (e.g., Bash, Groovy).
- Experience delivering within an agile delivery framework.
- Experience with distributed version control tools (Git, GitHub, Bitbucket).
- Experience with Jenkins or pipeline-based modern CI/CD systems.
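A minimal sketch of the watermark-driven incremental load pattern referenced above, using pyodbc against SQL Server; the DSN, table names, and watermark bookkeeping table are hypothetical placeholders:

```python
# Minimal sketch: watermark-driven incremental extract from SQL Server with
# pyodbc. All names are placeholders; assumes a seeded row in etl.watermarks.
import pyodbc

conn = pyodbc.connect("DSN=dwh;UID=etl_user;PWD=secret")
cur = conn.cursor()

# Read the high-water mark left by the previous run.
cur.execute(
    "SELECT last_loaded_at FROM etl.watermarks WHERE source_table = ?", "trades"
)
last_loaded_at = cur.fetchone()[0]

# Pull only rows changed since the last run.
cur.execute(
    "SELECT trade_id, amount, updated_at FROM dbo.trades WHERE updated_at > ?",
    last_loaded_at,
)
rows = cur.fetchall()
print(f"extracted {len(rows)} changed rows")

# Advance the watermark in the same transaction as the load step.
cur.execute(
    "UPDATE etl.watermarks SET last_loaded_at = SYSUTCDATETIME() "
    "WHERE source_table = ?",
    "trades",
)
conn.commit()
conn.close()
```

In a StreamSets pipeline the same idea would typically be handled by a JDBC origin's offset column rather than hand-rolled bookkeeping.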
Posted 2 months ago
10.0 - 14.0 years
30 - 35 Lacs
Pune
Work from Office
Job Title: StreamSets ETL Developer, AVP
Location: Pune, India

Role Description
Currently DWS sources technology infrastructure, corporate functions systems [Finance, Risk, HR, Legal, Compliance, AFC, Audit, Corporate Services etc.] and other key services from DB. Project Proteus aims to strategically transform DWS to an Asset Management standalone operating platform; an ambitious and ground-breaking project that delivers separated DWS infrastructure and Corporate Functions in the cloud with essential new capabilities, further enhancing DWS' highly competitive and agile Asset Management capability. This role offers a unique opportunity to be part of a high-performing team implementing a strategic future-state technology landscape for all DWS Corporate Functions globally. We are seeking a highly skilled and motivated ETL developer (individual contributor) to join our integration team. The ETL developer will be responsible for developing, testing, and maintaining robust and scalable ETL processes to support our data integration initiatives. This role requires a strong understanding of database, Unix, and ETL concepts, excellent SQL skills, and experience with ETL tools and databases.

Your key responsibilities
- Work with our clients to deliver value through the delivery of high-quality software within an agile development lifecycle.
- Develop and thoroughly test ETL solutions/pipelines.
- Define and evolve the architecture of the components you are working on, and contribute to architectural decisions at a department and bank-wide level.
- Take E2E accountability and ownership of work/projects, applying robust engineering practices.
- Convert business requirements into technical designs; perform code reviews for other team members.

Additionally, the role will include other responsibilities, such as:
- Leading and collaborating across teams.
- Team management and stakeholder reporting.
- Mentoring/coaching junior team members on both the technical and functional front.
- Bringing deep industry knowledge and best practices into the team.
- Working closely with stakeholders and other teams, such as Functional Analysis and Quality Assurance.
- Working with BA and QA to troubleshoot and resolve reported bugs/issues in applications.

Your skills and experience
- Bachelor's degree from an accredited college or university with a concentration in Science or an IT-related discipline (or equivalent).
- 10-14 years of hands-on experience with Oracle/SQL Server and Unix.
- Experience developing and optimizing ETL pipelines for data ingestion, manipulation, and integration.
- Strong proficiency in working with complex queries, stored procedures, and functions.
- Solid understanding of relational database concepts.
- Familiarity with data modeling concepts (conceptual, logical, physical).
- Familiarity with HDFS, Kafka, microservices, and Splunk.
- Familiarity with cloud-based platforms (e.g., GCP, AWS).
- Experience with scripting languages (e.g., Bash, Groovy).
- Experience delivering within an agile delivery framework.
- Experience with distributed version control tools (Git, GitHub, Bitbucket).
- Experience with Jenkins or pipeline-based modern CI/CD systems.

Nice to have
- Hands-on experience with StreamSets.
- Exposure to Python and GCP cloud.
Posted 2 months ago
4.0 - 8.0 years
7 - 12 Lacs
Pune, Bengaluru
Work from Office
Job Title: StreamSets ETL Developer, Associate
Location: Pune, India

Role Description
Currently DWS sources technology infrastructure, corporate functions systems [Finance, Risk, HR, Legal, Compliance, AFC, Audit, Corporate Services etc.] and other key services from DB. Project Proteus aims to strategically transform DWS to an Asset Management standalone operating platform; an ambitious and ground-breaking project that delivers separated DWS infrastructure and Corporate Functions in the cloud with essential new capabilities, further enhancing DWS' highly competitive and agile Asset Management capability. This role offers a unique opportunity to be part of a high-performing team implementing a strategic future-state technology landscape for all DWS Corporate Functions globally. We are seeking a highly skilled and motivated ETL developer (individual contributor) to join our integration team. The ETL developer will be responsible for developing, testing, and maintaining robust and scalable ETL processes to support our data integration initiatives. This role requires a strong understanding of database, Unix, and ETL concepts, excellent SQL skills, and experience with ETL tools and databases.

Your key responsibilities
- Create good-quality software using standard coding practices, with hands-on code development.
- Thoroughly test developed ETL solutions/pipelines.
- Review the code of other team members.
- Take E2E accountability and ownership of work/projects, applying robust engineering practices.
- Convert business requirements into technical design.
- Handle delivery, deployment, review, business interaction, and environment maintenance.

Additionally, the role will include other responsibilities, such as:
- Collaborating across teams.
- Sharing information and transferring knowledge and expertise to team members.
- Working closely with stakeholders and other teams, such as Functional Analysis and Quality Assurance.
- Working with BA and QA to troubleshoot and resolve reported bugs/issues in applications.

Your skills and experience
- Bachelor's degree from an accredited college or university with a concentration in Science or an IT-related discipline (or equivalent).
- Hands-on experience with StreamSets, SQL Server, and Unix.
- Experience developing and optimizing ETL pipelines for data ingestion, manipulation, and integration.
- Strong proficiency in SQL, including complex queries, stored procedures, and functions.
- Solid understanding of relational database concepts.
- Familiarity with data modeling concepts (conceptual, logical, physical).
- Familiarity with HDFS, Kafka, microservices, and Splunk.
- Familiarity with cloud-based platforms (e.g., GCP, AWS).
- Experience with scripting languages (e.g., Bash, Groovy).
- Experience delivering within an agile delivery framework.
- Experience with distributed version control tools (Git, GitHub, Bitbucket).
- Experience with Jenkins or pipeline-based modern CI/CD systems.
Posted 2 months ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Educational Qualification: Bachelor of Engineering
Service Line: Information Systems

Responsibilities
Key Responsibilities:
- Design, develop, and deploy MSBI solutions using SQL Server, Power BI, SSIS, and SSAS
- Create and optimize stored procedures, DAX queries, and data models to drive business insights
- Collaborate with stakeholders to understand business requirements and develop data visualizations and reports
- Develop and maintain data warehouses, data marts, and ETL processes using SSIS and SSAS
- Ensure data quality, integrity, and security by implementing data validation and error-handling mechanisms
- Communicate complex technical concepts to non-technical stakeholders through effective communication skills

Additional Responsibilities:
Minimum Qualifications:
- B.E/B.Tech/M.E/M.Tech/MCA degree in Computer Science or a related field
- Minimum 3 years of experience in SQL Server and Power BI
- Proven experience in developing stored procedures, DAX queries, SSIS, and SSAS
- Excellent communication skills to work with cross-functional teams

Technical and Professional:
Preferred Qualifications:
- Experience with Microsoft Fabric, StreamSets, and Kafka is a plus
- Proficiency in the Python programming language with a minimum of 1 year of experience
- Knowledge of generative AI tools like ChatGPT and Gemini, and basic prompting techniques, is desirable
- Experience with data engineering, data science, or business intelligence is an asset
- Strong understanding of data modeling, data warehousing, and ETL processes
- Ability to work in an agile environment and adapt to changing priorities and requirements
- Strong problem-solving skills, attention to detail, and ability to deliver high-quality results under tight deadlines

Preferred Skills:
- Technology - Business Intelligence - Analytics - MSBI (SSAS)
- Technology - Business Intelligence - Reporting - MSBI (SSRS)
- Technology - Business Intelligence - Integration - MSBI (SSIS)
- Technology - SQL SSIS - SQL Server
Posted 2 months ago
10.0 - 15.0 years
12 - 16 Lacs
Gurugram
Work from Office
1. Experience working with AWS cloud services (i.e. S3, AWS Glue, Glue Catalog, Step Functions, Lambda, EventBridge, etc.)
2. Must have hands-on experience with DQ libraries for data quality checks
3. Proficiency in data modelling and database management
4. Strong programming skills in Python and Unix, and in ETL technologies like Informatica
5. Experience with DevOps and Agile methodology and associated toolsets (including working with code repositories)
6. Knowledge of big data technologies like Hadoop and Spark
7. Must have hands-on experience with reporting tools: Tableau, QuickSight, and MS Power BI
8. Must have hands-on experience with databases like Postgres and MongoDB
9. Experience using industry-recognised frameworks; experience with StreamSets & Kafka is preferred
10. Experience in data sourcing, including real-time data integration
11. Proficiency in Snowflake Cloud and associated data migration from on-premise to cloud, with knowledge of databases like Snowflake, Azure Data Lake, and Postgres
Posted 2 months ago
8.0 - 12.0 years
8 - 12 Lacs
Pune, Maharashtra, India
On-site
In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization.

What you will do:
- Define and implement best practices for data modelling, schema design, and query optimization in Snowflake
- Develop and manage ETL/ELT workflows to ingest, transform, and load data into Snowflake from various sources
- Integrate data from diverse systems like databases, APIs, flat files, cloud storage, etc. into Snowflake
- Use tools like StreamSets, Informatica, or dbt to streamline data transformation processes
- Monitor and tune Snowflake performance, including warehouse sizing, query optimization, and storage management
- Manage Snowflake caching, clustering, and partitioning to improve efficiency
- Analyze and resolve query performance bottlenecks
- Monitor and resolve data quality issues within the warehouse
- Collaborate with data analysts, data engineers, and business users to understand reporting and analytic needs
- Work closely with the DevOps team for automation, deployment, and monitoring
- Plan and execute strategies for scaling Snowflake environments as data volume grows
- Monitor system health and proactively identify and resolve issues
- Implement automations for regular tasks
- Enable seamless integration of Snowflake with BI tools like Power BI and create dashboards
- Support ad hoc query requests while maintaining system performance
- Create and maintain documentation related to data warehouse architecture, data flow, and processes
- Provide technical support, troubleshooting, and guidance to users accessing the data warehouse
- Optimize Snowflake queries and manage performance
- Keep up to date with emerging trends and technologies in data warehousing and data management
- Apply good working knowledge of the Linux operating system
- Use Git and other repository management solutions
- Apply good knowledge of monitoring tools like Dynatrace and Splunk
- Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines
- Provide mentorship and guidance to team members in Snowflake implementation, performance tuning, and data management
- Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps
- Act as the point of contact for Snowflake-related queries, issues, and initiatives

What you will need to have:
- 8 to 10 years of experience with data management tools like Snowflake, StreamSets, and Informatica
- Experience with monitoring tools like Dynatrace and Splunk
- Experience with Kubernetes cluster management, CloudWatch for monitoring and logging, and the Linux OS
- Ability to track progress against assigned tasks, report status, and proactively identify issues
- Ability to present information effectively in communications with peers and the project management team
- Highly organized; works well in a fast-paced, fluid, and dynamic environment

What would be great to have:
- Experience with EKS for managing Kubernetes clusters
- Containerization technologies such as Docker and Podman
- AWS CLI for command-line interactions
- CI/CD pipelines using Harness
- S3 for storage solutions and IAM for access management
- Banking and Financial Services experience
- Knowledge of software development life cycle best practices
Posted 2 months ago
11.0 - 16.0 years
11 - 16 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
We are looking for a highly experienced and strategic Data Engineer to drive the design, development, and optimization of our enterprise data platform. This role requires deep technical expertise in AWS, StreamSets, and Snowflake, along with solid experience in Kubernetes, Apache Airflow, and unit testing. The ideal candidate will lead a team of data engineers and play a key role in delivering scalable, secure, and high-performance data solutions for both historical and incremental data loads.

Key Responsibilities:
- Lead the architecture, design, and implementation of end-to-end data pipelines using StreamSets and Snowflake.
- Oversee the development of scalable ETL/ELT processes for historical data migration and incremental data ingestion.
- Guide the team in leveraging AWS services (S3, Lambda, Glue, IAM, etc.) to build cloud-native data solutions.
- Provide technical leadership in deploying and managing containerized applications using Kubernetes.
- Define and implement workflow orchestration strategies using Apache Airflow (see the sketch after this posting).
- Establish best practices for unit testing, code quality, and data validation.
- Collaborate with data architects, analysts, and business stakeholders to align data solutions with business goals.
- Mentor junior engineers and foster a culture of continuous improvement and innovation.
- Monitor and optimize data workflows for performance, scalability, and cost-efficiency.

Required Skills & Qualifications:
- High proficiency in AWS, including hands-on experience with core services (S3, Lambda, Glue, IAM, CloudWatch).
- Expert-level experience with StreamSets, including Data Collector, Transformer, and Control Hub.
- Strong Snowflake expertise, including data modeling, SnowSQL, and performance tuning.
- Medium-level experience with Kubernetes, including container orchestration and deployment.
- Working knowledge of Apache Airflow for workflow scheduling and monitoring.
- Experience with unit testing frameworks and practices in data engineering.
- Proven experience in building and managing ETL pipelines for both batch and real-time data.
- Strong command of SQL and scripting languages such as Python or Shell.
- Experience with CI/CD pipelines and version control tools (e.g., Git, Jenkins).

Preferred Qualifications:
- AWS certification (e.g., AWS Certified Data Analytics, Solutions Architect).
- Experience with data governance, security, and compliance frameworks.
- Familiarity with Agile methodologies and tools like Jira and Confluence.
- Prior experience in a leadership or mentoring role within a data engineering team.

Our Perks and Benefits:
Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group Health Insurance covering a family of 4
- Term Insurance and Accident Insurance
- Paid Holidays & Earned Leaves
- Paid Parental Leave
- Learning & Career Development
- Employee Wellness

Job Location: Bengaluru, India
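A minimal sketch of the Airflow orchestration mentioned above: a two-task daily DAG (extract, then load) using the Airflow 2.x API; the task bodies, DAG id, and names are hypothetical placeholders:

```python
# Minimal sketch: a daily Airflow DAG with an extract task followed by a
# Snowflake load task. Callables are stubs standing in for real pipeline steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_stage(**context):
    # Placeholder: pull the day's files to a staging location.
    print("extracting for", context["ds"])

def load_to_snowflake(**context):
    # Placeholder: COPY INTO the target table from the stage.
    print("loading for", context["ds"])

with DAG(
    dag_id="daily_snowflake_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_stage", python_callable=extract_to_stage)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
    extract >> load
```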
Posted 3 months ago
3.0 - 7.0 years
3 - 7 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
We are looking for a highly experienced and strategic Senior Data Engineer to drive the design, development, and optimization of our enterprise data platform. This role requires deep technical expertise in AWS, StreamSets, and Snowflake, along with solid experience in Kubernetes, Apache Airflow, and unit testing. The ideal candidate will lead a team of data engineers and play a key role in delivering scalable, secure, and high-performance data solutions for both historical and incremental data loads.

Key Responsibilities:
- Lead the architecture, design, and implementation of end-to-end data pipelines using StreamSets and Snowflake.
- Oversee the development of scalable ETL/ELT processes for historical data migration and incremental data ingestion.
- Guide the team in leveraging AWS services (S3, Lambda, Glue, IAM, etc.) to build cloud-native data solutions.
- Provide technical leadership in deploying and managing containerized applications using Kubernetes.
- Define and implement workflow orchestration strategies using Apache Airflow.
- Establish best practices for unit testing, code quality, and data validation.
- Collaborate with data architects, analysts, and business stakeholders to align data solutions with business goals.
- Mentor junior engineers and foster a culture of continuous improvement and innovation.
- Monitor and optimize data workflows for performance, scalability, and cost-efficiency.

Required Skills & Qualifications:
- High proficiency in AWS, including hands-on experience with core services (S3, Lambda, Glue, IAM, CloudWatch).
- Expert-level experience with StreamSets, including Data Collector, Transformer, and Control Hub.
- Strong Snowflake expertise, including data modeling, SnowSQL, and performance tuning.
- Medium-level experience with Kubernetes, including container orchestration and deployment.
- Working knowledge of Apache Airflow for workflow scheduling and monitoring.
- Experience with unit testing frameworks and practices in data engineering.
- Proven experience in building and managing ETL pipelines for both batch and real-time data.
- Strong command of SQL and scripting languages such as Python or Shell.
- Experience with CI/CD pipelines and version control tools (e.g., Git, Jenkins).

Preferred Qualifications:
- AWS certification (e.g., AWS Certified Data Analytics, Solutions Architect).
- Experience with data governance, security, and compliance frameworks.
- Familiarity with Agile methodologies and tools like Jira and Confluence.
- Prior experience in a leadership or mentoring role within a data engineering team.

Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group Health Insurance covering a family of 4
- Term Insurance and Accident Insurance
- Paid Holidays & Earned Leaves
- Paid Parental Leave
- Learning & Career Development
- Employee Wellness
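For illustration, a minimal sketch of the unit-testing practice this role emphasizes, using pytest against a hypothetical record-normalization transform (the function and tests are illustrative, not from any specific codebase):

```python
# Minimal sketch: pytest-style unit tests for a small pipeline transform.
# normalize_record is a hypothetical example function.


def normalize_record(record: dict) -> dict:
    """Trim string values, lowercase keys, and drop None values."""
    return {
        key.lower(): value.strip() if isinstance(value, str) else value
        for key, value in record.items()
        if value is not None
    }


def test_normalize_record_trims_and_lowercases():
    raw = {"Name": "  Ada ", "CITY": "Pune", "phone": None}
    assert normalize_record(raw) == {"name": "Ada", "city": "Pune"}


def test_normalize_record_keeps_non_string_values():
    assert normalize_record({"Age": 42}) == {"age": 42}
```

Running `pytest` in the project directory would discover and execute both tests, which is the kind of data-validation safety net the posting asks candidates to establish.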
Posted 3 months ago
3.0 - 5.0 years
5 - 8 Lacs
Pune
Work from Office
Role Purpose
Consultants are expected to complete specific tasks as part of a consulting project with minimal supervision. They will start to build a core area of expertise and will contribute to client projects, typically involving in-depth analysis, research, supporting solution development and being a successful communicator. The Consultant must achieve high personal billability.

Do

Consulting Execution:
- An ambassador for the Wipro tenets and values
- Work stream leader or equivalent who coordinates small teams
- Receives great feedback from the client
- Client focused and tenacious in approach to solving client issues and achieving client objectives
- Organises work competently and ensures timeliness and quality of deliverables
- Has a well-grounded understanding of best practice in the given area and industry knowledge, and can apply this under supervision
- Develops strong working relationships with team and client staff

Business Development:
- Ensures high levels of individual utilisation achievement in line with the levels expected as part of the goal-setting process
- Sells self by creating extensions to current assignments and demand on new assignments based on track record and reputation
- Understands Wipro's core service and consulting offering
- Builds relationships with client peers and provides required intelligence/insights to solve clients' business problems
- Identifies sales leads and extension opportunities
- Anchors market research activities in the chosen area of work

Thought Leadership:
- Develops insight into the chosen industry and technology trends
- Contributes to team thought leadership
- Ensures a track record is written up of own assignment and, where appropriate, ensures it is written up as a case study

Contribution to Practice/Wipro:
- Continually delivers all Wipro admin in a timely manner (timesheets, appraisals, expenses, etc.)
- Demonstrates contribution to internal initiatives
- Contributes to the IP and knowledge management of Wipro and GCG and ensures its availability on the central knowledge management repository of Wipro and GCG
- Leverages tools, methods, assets, information sources, and IP available within the knowledge management platform
- Engages with other consulting and delivery teams to enhance collaboration and growth, and is part of the Wipro 'Communities' activities
- Proactively participates in and suggests ideas for practice development initiatives
- Makes use of common methods and tools which are proven to work
- Develops process assets and other reusable artefacts based on learnings from projects
- Shares knowledge within the team and networks effectively with SMEs to bolster understanding and build skills

Deliver Strategic Objectives (select relevant measures / modify measures after speaking to your Manager):

| Parameter | Description | Measure |
| --- | --- | --- |
| Deliver growth in consulting revenues | Support business performance for direct consulting against relevant quarterly/annual targets; improve quality of consulting by flawless delivery of transformation engagements | % of personal utilisation achievement (against target); no. of RFI/RFP responses supported; no. of transformation engagements delivered; no. of referenceable clients and testimonials; average CSAT/PCSAT across projects |
| Generate impact | Enable pull-through business/impact for Wipro through front-end consulting engagements, deal pursuit and client relationships | Number and value of downstream opportunities identified for GCG and larger Wipro |
| Grow market positioning | Lead/actively contribute to the development of thought leadership, offerings and assets for the practice to support business growth | Eminence and thought leadership demonstrated through content, citations and testimonials; contributions to white papers/POVs/assets such as repeatable IP, frameworks & methods; number of ideas generated and active contribution to the development of new consulting offerings/solutions/assets |
| Provide consulting leadership to accounts | Support the GCG Account Lead/account team to grow the consulting service portfolio | Number and $ value of consulting deals supported in the account |
| Grow the consulting talent | Grow skills and capabilities to deliver consulting engagements in new industries, business themes, frameworks and technologies | Self-development: minimum 32 hours of training in a year, combining online and classroom sessions on new industries, business themes, technologies and frameworks |
| Build the consulting community | Individual contribution to people development and collaboration effectiveness | Distinct participation in and demonstration of: collaboration across GCG (contribution to cross-practice offerings, sharing of best practices and industrial/technological expertise, consulting community initiatives); knowledge management (number of assets owned and contributed to Consulting Central) |

Mandatory Skills: StreamSets.
Posted 3 months ago
5.0 - 9.0 years
15 - 20 Lacs
Hyderabad
Hybrid
About Us: Our global community of colleagues bring a diverse range of experiences and perspectives to our work. You'll find us working from a corporate office or plugging in from a home desk, listening to our customers and collaborating on solutions. Our products and solutions are vital to businesses of every size, scope and industry. And at the heart of our work you'll find our core values: to be data inspired, relentlessly curious and inherently generous. Our values are the constant touchstone of our community; they guide our behavior and anchor our decisions.

Designation: Software Engineer II
Location: Hyderabad

KEY RESPONSIBILITIES
- Design, build, and deploy new data pipelines within our Big Data ecosystems using StreamSets, Talend, Informatica BDM, etc.
- Document new and existing pipelines and datasets.
- Design ETL/ELT data pipelines using StreamSets, Informatica, or any other ETL processing engine.
- Familiarity with data pipelines, data lakes, and modern data warehousing practices (virtual data warehouse, push-down analytics, etc.)
- Expert-level programming skills in Python
- Expert-level programming skills in Spark
- Cloud-based infrastructure: GCP
- Experience with one of the ETL tools (Informatica, StreamSets) in the creation of complex parallel loads, cluster batch execution, and dependency creation using jobs/topologies/workflows
- Experience in SQL and the conversion of SQL stored procedures into Informatica/StreamSets
- Strong exposure to working with web service origins/targets/processors/executors, XML/JSON sources, and RESTful APIs
- Strong exposure to working with relational databases (DB2, Oracle & SQL Server), including complex SQL constructs and DDL generation
- Exposure to Apache Airflow for scheduling jobs
- Strong knowledge of big data architecture (HDFS), cluster installation, configuration, monitoring, cluster security, cluster resource management, maintenance, and performance tuning
- Create POCs to enable new workloads and technical capabilities on the platform.
- Work with the platform and infrastructure engineers to implement these capabilities in production.
- Manage workloads and enable workload optimization, including managing resource allocation and scheduling across multiple tenants to fulfill SLAs.
- Participate in planning activities and data science, and perform activities to increase platform skills.

KEY REQUIREMENTS
- Minimum 6 years of experience in ETL/ELT technologies, preferably StreamSets/Informatica/Talend
- Minimum of 6 years of hands-on experience with big data technologies, e.g. Hadoop, Spark, Hive
- Minimum 3+ years of experience with Spark
- Minimum 3 years of experience in cloud environments, preferably GCP
- Minimum of 2 years working in a big data service delivery (or equivalent) role focusing on the following disciplines:
  - Any experience with NoSQL and graph databases
  - Informatica or StreamSets data integration (ETL/ELT)
  - Exposure to role- and attribute-based access controls
  - Hands-on experience with managing solutions deployed in the cloud, preferably on GCP
- Experience working in a global company; working in a DevOps model is a plus

Dun & Bradstreet is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, creed, sex, age, national origin, citizenship status, disability status, sexual orientation, gender identity or expression, pregnancy, genetic information, protected military and veteran status, ancestry, marital status, medical condition (cancer and genetic characteristics) or any other characteristic protected by law.
We are committed to Equal Employment Opportunity and providing reasonable accommodations to qualified candidates and employees. If you are interested in applying for employment with Dun & Bradstreet and need special assistance or an accommodation to use our website or to apply for a position, please send an e-mail with your request to acquisitiont@dnb.com. Determinations on requests for reasonable accommodation are made on a case-by-case basis.
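For illustration, a minimal sketch of converting a SQL-style aggregation into a PySpark job, the kind of stored-procedure-to-pipeline conversion described above; the bucket paths, table, and column names are hypothetical placeholders:

```python
# Minimal sketch: a daily order roll-up, rewritten from a SQL stored
# procedure into PySpark. Paths and columns are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Hypothetical GCS source written by an upstream ingestion pipeline.
orders = spark.read.parquet("gs://example-bucket/orders/")

# Equivalent of: SELECT order_date, COUNT(*), SUM(amount)
#                FROM orders WHERE status = 'COMPLETE' GROUP BY order_date
daily = (
    orders
    .where(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Partitioned output so downstream readers can prune by date.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "gs://example-bucket/marts/orders_daily/"
)
spark.stop()
```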
Posted 3 months ago