Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities: Developing and supporting scalable, extensible, and highly available data solutions. Deliver on critical business priorities while ensuring alignment with the wider architectural vision. Identify and help address potential risks in the data supply chain. Follow and contribute to technical standards. Design and develop analytical data models.
Required Qualifications & Work Experience: First Class Degree in Engineering/Technology (4-year graduate course). 5 to 8 years’ experience implementing data-intensive solutions using agile methodologies. Experience of relational databases and using SQL for data querying, transformation and manipulation (an illustrative sketch follows at the end of this posting). Experience of modelling data for analytical consumers. Hands-on Mantas (Oracle FCCM) expertise throughout the full development life cycle, including requirements analysis, functional design, technical design, programming, testing, documentation, implementation, and ongoing technical support. Ability to translate business needs (BRD) into effective technical solutions and documents (FRD/TSD). Ability to automate and streamline the build, test and deployment of data pipelines. Experience in cloud-native technologies and patterns. A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training. Excellent communication and problem-solving skills.
Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica.
Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager and Scenario Development; thorough knowledge and hands-on experience in Mantas FSDM, DIS and Batch Scenario Manager.
Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing.
Data Warehousing & Database Management: Understanding of data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design.
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures.
Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala.
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.
Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows.
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.
Demonstrable understanding of underlying architectures and trade-offs.
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls.
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes.
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg and Delta.
Others: Basics of job schedulers like Autosys; basics of entitlement management. Certification on any of the above topics would be an advantage.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Digital Software Engineering
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
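As a flavour of the detection logic a Mantas scenario encodes, here is a minimal, hedged sketch using SQLite with hypothetical table and column names; real scenarios are built in Scenario Manager against the Oracle FCCM data model, not in standalone SQL like this:

```python
# Illustrative only: a simplified 'structuring' pattern -- flag accounts whose
# cash deposits over a lookback window exceed a threshold. Table, column names
# and thresholds are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txn (acct_id TEXT, txn_date TEXT, txn_type TEXT, amount REAL);
INSERT INTO txn VALUES
  ('A1', '2024-01-02', 'CASH_DEP', 9500),
  ('A1', '2024-01-03', 'CASH_DEP', 9800),
  ('A2', '2024-01-02', 'WIRE',     20000);
""")

# Aggregate deposits per account over the window and apply the threshold.
rows = conn.execute("""
    SELECT acct_id, SUM(amount) AS total_cash
    FROM txn
    WHERE txn_type = 'CASH_DEP'
      AND txn_date BETWEEN '2024-01-01' AND '2024-01-07'
    GROUP BY acct_id
    HAVING SUM(amount) > 15000
""").fetchall()

for acct_id, total in rows:
    print(f"ALERT candidate: account {acct_id}, 7-day cash deposits {total:.0f}")
```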
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Summary...
Drives the execution of multiple business plans and projects by identifying customer and operational needs; developing and communicating business plans and priorities; removing barriers and obstacles that impact performance; providing resources; identifying performance standards; measuring progress and adjusting performance accordingly; developing contingency plans; and demonstrating adaptability and supporting continuous learning. Provides supervision and development opportunities for associates by selecting and training; mentoring; assigning duties; building a team-based work environment; establishing performance expectations and conducting regular performance evaluations; providing recognition and rewards; coaching for success and improvement; and ensuring Belonging awareness. Promotes and supports company policies, procedures, mission, values, and standards of ethics and integrity by training and providing direction to others in their use and application; ensuring compliance with them; and utilizing and supporting the Open Door Policy. Ensures business needs are being met by evaluating the ongoing effectiveness of current plans, programs, and initiatives; consulting with business partners, managers, co-workers, or other key stakeholders; soliciting, evaluating, and applying suggestions for improving efficiency and cost-effectiveness; and participating in and supporting community outreach events.
What you'll do...
About the Team: The Centroid team at Walmart serves as the backbone of Walmart's end-to-end supply chain strategy. The team is entrusted with designing and implementing a long-term supply chain strategy that uses advanced data analytics and data science. Its primary objective is to ensure that Walmart provides top-tier customer service while supporting increasing demand over time and simultaneously operating at low and efficient costs. The team utilizes sophisticated data analysis methods to understand patterns, identify potential bottlenecks, and predict future trends. This enables them to optimize processes, make informed business decisions, and enhance overall operational efficiency. One of Centroid's key responsibilities also includes the creation of a Digital Twin Simulation platform for Walmart's supply chain. This innovative tool allows the team to test and validate all future strategies and tactical decisions before they are launched operationally. It also enables a deep assessment of long-term strategic sensitivity. In essence, the Centroid team's work is integral to ensuring Walmart's supply chain is robust, flexible, and capable of adapting to ever-changing market demands. Their work helps to keep Walmart at the forefront of retail supply chain management, delivering exceptional service to customers while maintaining efficient operational costs.
What You'll Do: Develop and manage advanced data analytics models to optimize supply chain strategies, balancing customer satisfaction with operational cost and asset efficiency. Leverage data analytics to identify opportunities for improvement and drive impactful results through collaboration with cross-functional teams. Establish relationships across Walmart functional areas to identify best practices, solicit data/input, coordinate interdisciplinary initiatives, and rally support for data-driven recommendations. Secure alignment and support from relevant business partners and management for data-centric projects, leading discussions to drive necessary change.
Utilize all available data resources effectively to ensure successful project outcomes. Communicate data insights clearly and persuasively through emails, verbal discussions, and presentations, tailoring communication methods to the audience for maximum impact. Collaborate with multiple supply chain business teams to proactively identify, assess, and leverage cost-saving and service improvement opportunities through advanced data analytics. Utilize advanced analytics models to derive insights that will inform policy design across various supply chain areas, laying out multiple scenarios and performing sensitivity analysis (a toy sketch follows at the end of this posting). Collaborate with Data Scientists and Engineers to productionize and scale advanced analytics models as needed. Develop and present compelling data-driven narratives/documents/visuals to influence key stakeholders in their decision-making. Provide coaching and training support to other team members in the supply chain area, leveraging your expertise in advanced data analytics.
What You'll Bring: Strong analytical acumen with technical expertise in advanced data analytics and modelling. Expert in SQL and cloud data platforms like BigQuery. Expert in programming in Python (or R). Experience in using data visualization tools like Tableau and Looker to drive powerful insights. Experience working with large data sets and distributed computing tools (Map/Reduce, Hadoop, Hive, and/or Spark). Experience operating in a cloud environment such as Google Cloud Platform or Microsoft Azure. Ability to work in a fast-paced, iterative development environment. Strong communication skills, both written and verbal, plus the ability to work with cross-functional teams of technical and non-technical members. Strong ability to understand the business, with good stakeholder management capabilities. Experience working in a cross-functional environment and leading or mentoring teams.
About Walmart Global Tech: Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That’s what we do at Walmart Global Tech. We’re a team of software engineers, data scientists, cybersecurity experts and service professionals within the world’s leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail.
Flexible, hybrid work: We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.
Benefits: Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.
Belonging: We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is—and feels—included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we’re able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.
Equal Opportunity Employer: Walmart, Inc., is an Equal Opportunities Employer – By Choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. That means understanding, respecting and valuing unique styles, experiences, identities, ideas and opinions – while being inclusive of all people.
Minimum Qualifications...
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Option 1: Bachelor's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field and 4 years' experience in an analytics-related field.
Option 2: Master's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field and 2 years' experience in an analytics-related field.
Option 3: 6 years' experience in an analytics or related field.
Preferred Qualifications...
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
Primary Location...
G, 1, 3, 4, 5 Floor, Building 11, Sez, Cessna Business Park, Kadubeesanahalli Village, Varthur Hobli, India
R-2182244
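A toy pandas sketch of the scenario and sensitivity analysis this role describes: vary a single cost driver across scenarios and compare total network cost. All lane names, volumes and rates are hypothetical.

```python
# Compare total supply-chain cost under several fuel-cost uplift scenarios
# over hypothetical lane-level data.
import pandas as pd

lanes = pd.DataFrame({
    "lane": ["DC1->StoreA", "DC1->StoreB", "DC2->StoreA"],
    "volume_units": [120_000, 80_000, 50_000],
    "cost_per_unit": [0.42, 0.55, 0.61],
})

results = []
for fuel_uplift in (0.0, 0.05, 0.10):  # 0%, 5%, 10% scenarios
    scenario = lanes.assign(
        total_cost=lanes.volume_units * lanes.cost_per_unit * (1 + fuel_uplift)
    )
    results.append({"fuel_uplift": fuel_uplift,
                    "network_cost": scenario.total_cost.sum()})

print(pd.DataFrame(results))
```

In practice the same pattern scales up: each scenario becomes a run of the digital-twin simulation rather than a one-line cost formula, but the sensitivity loop is the same shape.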
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a hands-on Data Engineer who is passionate about solving business problems through innovation and engineering practices. As a Data Engineer, the candidate will leverage deep technical knowledge and will apply knowledge of data architecture standards, data warehousing, data structures, and business intelligence to drive the creation of high-quality data products for data-driven decision making.
Required Qualifications: 3-6 years' experience implementing data-intensive solutions using agile methodologies. Code-contributing member of Agile teams, working to deliver sprint goals. Write clean, efficient, and maintainable code that meets the highest standards of quality. Very strong in coding Python/PySpark and UNIX shell scripting. Experience in cloud-native technologies and patterns. Ability to automate and streamline the build, test and deployment of data pipelines.
Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in data integration platforms such as Apache Spark. Experienced in writing PySpark code to handle large data sets and perform data transformations; familiarity with PySpark integration with other Apache Spark components, such as Spark SQL; understanding of PySpark optimization techniques (a short PySpark sketch appears below). Strong proficiency in working with relational databases and using SQL for data querying, transformation, and manipulation.
Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Iceberg for data storage and processing.
Data Warehousing & Database Management: Understanding of data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design.
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures.
Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, PySpark, UNIX shell scripting.
DevOps: Exposure to concepts and enablers - CI/CD platforms, Bitbucket/GitHub, JIRA, Jenkins, Tekton, Harness.
Technical Skills (Valuable)
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls; framework libraries like Deequ.
Federated Query: Starburst, Trino.
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes, OpenShift.
File Formats: Exposure to working with file/table formats such as Avro, Parquet, Iceberg, Delta.
Schedulers: Basics of job schedulers like Autosys, Airflow.
Cloud: Experience in cloud-native technologies and patterns (AWS, Google Cloud).
Nice to have: Java, for REST API development.
Other skills: Strong project management and organizational skills. Excellent problem-solving, communication, and organizational skills. Proven ability to work independently and with a team. Experience in managing and implementing successful projects. Ability to adjust priorities quickly as circumstances dictate. Consistently demonstrates clear and concise written and verbal communication.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
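A minimal PySpark sketch of the transformation work this posting describes: read a raw file, apply typed transformations, run a Spark SQL aggregation, and write partitioned Parquet. All paths and column names are hypothetical; this is a sketch, not a production pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical landing-zone CSV with trade_ts, book, notional columns.
raw = spark.read.option("header", True).csv("/data/landing/trades.csv")

cleaned = (
    raw.withColumn("trade_ts", F.to_timestamp("trade_ts"))
       .withColumn("notional", F.col("notional").cast("double"))
       .filter(F.col("notional").isNotNull())
)

# Spark SQL integration: register a view and aggregate with plain SQL.
cleaned.createOrReplaceTempView("trades")
daily = spark.sql("""
    SELECT to_date(trade_ts) AS trade_date, book, SUM(notional) AS gross_notional
    FROM trades
    GROUP BY to_date(trade_ts), book
""")

# Repartitioning by the write key is a simple, common optimization that
# avoids producing many small output files.
(daily.repartition("trade_date")
      .write.mode("overwrite")
      .partitionBy("trade_date")
      .parquet("/data/curated/daily_notional"))
```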
Posted 1 week ago
1.0 - 2.6 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary
Position Summary: Analyst – ETL Testing - Deloitte Support Services India Private Limited
USI DT Canada MF is an integral part of the Information Technology Services group. The principal focus of this organization is the development and maintenance of technology solutions that e-enable the delivery of Function and Marketplace Services and Management Information Systems. The Solutions Delivery group develops and maintains solutions built on varied technologies like Siebel, PeopleSoft, Microsoft technologies, SAP, Hadoop, ETL, BI and Lotus Notes. Solutions Delivery Canada has various groups which provide best-of-breed solutions to clients by following a streamlined system development methodology. Solutions Delivery comprises groups like Usability, Application Architecture, Development, Quality Assurance and Performance.
Role Specific Responsibilities / Work You’ll Do: Responsible for planning, developing, and coordinating testing activities including Test Plan creation, Test Case creation, debugging, execution, and test analysis. Responsible for the execution of test scenarios in support of the test team. Familiarize themselves with the business functionality and technology used for assigned applications (under test). Utilize ETL QA testing tools, methodologies, and processes. Work closely with the on-site team towards successful test phases. Encourage collaborative efforts and camaraderie with other Release Stream team areas. Ensure the quality and low bug rates of code released into production. Responsible for successful execution and alerting team leads and managers of obstacles, issues, and risks. Able to discuss the status of all open issues facing the test team and describe actions taken to mitigate such issues. Responsible for coordinating/engaging build movements to the QA environment as directed by the Team Lead.
The team: EDC Canada is the Canada CIO’s IT department, which manages an end-to-end portfolio of Canada business applications and technology infrastructure that supports business processes common to the Deloitte Canada member firm.
Cutting Edge Technologies: At USI DT Canada MF, you will be part of an exciting journey that will keep you ahead of the curve. Be it our innovative delivery model for agile or our Communities of Practice, we are constantly investing in leading-edge technologies to give our practitioners a world-class experience. We have programs and projects spanning a multitude of technologies, and we always stay abreast of evolving technologies and emerging industry-leading practices such as agile.
Application Development and Solutions Delivery: Start from Architecture and User Experience and evolve into design, develop, transform, re-platform, or custom-build systems in complex business scenarios. We manage a portfolio of enterprise-scale applications and solutions used by practitioners in Canada. Offerings include Custom Development, Packaged Application Development, Application Architecture and Testing Advisory Services. Technologies include Business Analytics, Business Intelligence, Cloud Development, Mobile, .Net, SharePoint, SAP HANA, Manual, Automated, and Performance testing.
Location: Hyderabad
Work Shift Timings: 11 AM to 8 PM
Qualifications (Essential): A Computer Science university degree and/or equivalent work experience. A strong commitment to professional client service excellence. Excellent interpersonal relations and demonstrated ability to work with others effectively in teams. Good verbal and written communication skills. Excellent analytical skills.
Top 3 Keywords: SQL databases, Test Strategy, API testing
Technical Skills And Qualifications: 1-2.6 years' experience in ETL testing. Demonstrates an understanding and working experience of SQL databases and ETL testing. Should be able to write queries to validate table mappings and structures. Should be able to perform schema validations. Good understanding of SCD types. Strong knowledge of database methodology. In-depth understanding of Data Warehousing/Business Intelligence concepts. Working experience in testing BI reports. Should be able to write queries to validate data quality during migration projects (a sketch of this kind of validation follows below). Demonstrates an understanding of any of the peripheral technologies utilized in SDC, including PeopleSoft, SAP and Aderant. Demonstrates a working understanding of tools like UFT and TFS. Experience with Microsoft tools is highly desired. Understands enterprise-wide networks and software implementations. Must have previous experience in creating complex SQL queries for data validation. Must have testing experience in Enterprise Data Warehouse (EDW). Good to have reports testing experience. Good to have working knowledge of Azure, DB2, HANA, SQL databases. Demonstrates a working understanding of planning, developing, and coordinating testing activities including Test Plan creation, Test Case creation, debugging, execution, and test analysis. Demonstrates an understanding of estimation techniques and QA planning. Demonstrates analytical skills in assessing user, functional and technical requirements. Demonstrates a working understanding of functional testing techniques and strategies. Demonstrates a working understanding of test analysis and design. Demonstrates a working understanding of analyzing test results and the creation of appropriate test metrics. Demonstrates a working understanding of the defect management process. Demonstrates an ability to provide accurate project estimates and timelines. Demonstrates an ability to deliver on project commitments. Produces work that consistently meets quality standards. Demonstrates the ability to operate as the primary resource on a project.
Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose: Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are.
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development: From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 304282
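A small runnable sketch of the reconciliation checks an ETL tester writes: compare row counts and aggregate sums between a source and a target table. It uses SQLite for portability; on a real project the same queries would run against the warehouse (e.g., SQL Server, DB2 or HANA), and the table names here are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src_orders (order_id INTEGER, amount REAL);
CREATE TABLE tgt_orders (order_id INTEGER, amount REAL);
INSERT INTO src_orders VALUES (1, 100.0), (2, 250.5), (3, 75.25);
INSERT INTO tgt_orders VALUES (1, 100.0), (2, 250.5), (3, 75.25);
""")

def one(sql):
    """Run a single-value query and return the scalar result."""
    return conn.execute(sql).fetchone()[0]

# Row-count and checksum reconciliation between source and target.
checks = {
    "row_count": (one("SELECT COUNT(*) FROM src_orders"),
                  one("SELECT COUNT(*) FROM tgt_orders")),
    "amount_sum": (one("SELECT ROUND(SUM(amount), 2) FROM src_orders"),
                   one("SELECT ROUND(SUM(amount), 2) FROM tgt_orders")),
}
for name, (src, tgt) in checks.items():
    status = "PASS" if src == tgt else "FAIL"
    print(f"{name}: source={src} target={tgt} -> {status}")
```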
Posted 1 week ago
4.0 - 9.0 years
6 - 16 Lacs
Coimbatore
Work from Office
Position Name: Data Engineer
Location: Coimbatore (Hybrid, 3 days per week)
Work Shift Timing: 1.30 pm to 10.30 pm (IST)
Mandatory Skills: Hadoop, Spark, Python, Databricks
Good to have: Java/Scala
The Role:
• Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
• Constructing infrastructure for efficient ETL processes from various sources and storage systems.
• Leading the implementation of algorithms and prototypes to transform raw data into useful information.
• Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
• Creating innovative data validation methods and data analysis tools.
• Ensuring compliance with data governance and security policies.
• Interpreting data trends and patterns to establish operational alerts.
• Developing analytical tools, programs, and reporting mechanisms.
• Conducting complex data analysis and presenting results effectively.
• Preparing data for prescriptive and predictive modeling.
• Continuously exploring opportunities to enhance data quality and reliability.
• Applying strong programming and problem-solving skills to develop scalable solutions.
Requirements:
• Experience in Big Data technologies (Hadoop, Spark, NiFi, Impala).
• Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
• High proficiency in Scala/Java and Spark for applied large-scale data processing.
• Expertise with big data technologies, including Spark, Data Lake, and Hive.
• Solid understanding of batch and streaming data processing techniques.
• Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion.
• Expert-level ability to write complex, optimized SQL queries across extensive data volumes (a short example of this pattern follows below).
• Experience with HDFS, NiFi, Kafka.
• Experience with Apache Ozone, Delta Tables, Databricks, Axon (Kafka), Spring Batch, Oracle DB.
• Familiarity with Agile methodologies.
• Obsession with service observability, instrumentation, monitoring, and alerting.
• Knowledge of or experience in architectural best practices for building data lakes.
Interested candidates can share their resume at Neesha1@damcogroup.com
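As an illustration of the optimized, set-based logic the requirements mention, here is a hedged PySpark sketch that keeps the latest record per key with a window function, a very common pattern when deduplicating change feeds at scale. All names and values are illustrative.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedup-sketch").getOrCreate()

# Toy change feed: several versions of the same customer record.
events = spark.createDataFrame(
    [("c1", "2024-01-01", 10.0),
     ("c1", "2024-01-03", 12.5),
     ("c2", "2024-01-02", 7.0)],
    ["customer_id", "event_date", "score"],
)

# Rank records per key by recency, then keep only the newest one.
w = Window.partitionBy("customer_id").orderBy(F.col("event_date").desc())
latest = (events.withColumn("rn", F.row_number().over(w))
                .filter("rn = 1")
                .drop("rn"))
latest.show()
```

The same query expressed in SQL (ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)) is usually far cheaper than a self-join with MAX(), which is why it shows up in interviews about optimized SQL over large volumes.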
Posted 1 week ago
9.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description:
About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!
Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.
Process Overview
As a part of Global Risk Analytics, Enterprise Risk Analytics (ERA) is responsible for the development of cross-business holistic analytical models and tools. Team responsibilities include:
Financed Emissions: responsible for supporting the calculation of asset-level balance sheet financed emissions, which are integral to the Bank’s goal of achieving net-zero greenhouse gas emissions by 2050.
Financial Crimes Modelling & Analytics: responsible for enterprise-wide financial crimes and compliance surveillance model development and ongoing monitoring across all lines of business globally.
Operational Risk: responsible for operational risk loss forecasting and capital model development for CCAR/stress testing and regulatory capital reporting/economic capital measurement purposes.
Business Transformations: a central team of Project Managers and Quantitative S/W engineers partnering with coverage-area ERA teams with the end goal of onboarding ERA production processes onto GCP/production platforms, as well as identifying risks/gaps in ERA processes which can be fixed with well-designed and controlled S/W solutions.
Trade Surveillance Analytics: responsible for modelling and analytics supporting trade surveillance activities within risk.
Advanced Analytics: responsible for driving research, development, and implementation of new enhanced risk metrics, and providing quantitative support for loss forecasting and stress testing requirements, including process improvement and automation.
Job Description
The role will be responsible for independently conducting quantitative analytics and modeling projects.
Responsibilities: Perform model development proof of concept, research model methodology, explore internal & external data sources, design model development data, and develop preliminary models. Conduct complex data analytics on modeling data; identify, explain & address data quality issues; apply data exclusions; perform data transformation; and prepare data for model development. Analyze portfolio definition, define model boundary, analyze model segmentation, develop Financed Emissions models for different asset classes, and analyze and benchmark model results (a simplified illustration follows below). Work with the Financed Emissions Data Team & Climate Risk Tech on the production process of model development & implementation data, including supporting data sourcing efforts, providing data requirements, performing data acceptance testing, etc. Work with the Financed Emissions Production & Reporting Team on model implementation, model production run analysis, and result analysis & visualization. Work with the ERA Model Implementation team & GCP Tech on model implementation, including opining on implementation design, providing the implementation data model & requirements, performing model implementation result testing, etc. Work with Model Risk Management (MRM) on model reviews and obtain model approvals. Work with GEG (Global Environmental Group) and FLU (Front Line Unit) on model requirements gathering & analysis, Climate Risk target setting, disclosure, analysis & reporting.
Requirements
Education: B.E./B.Tech/M.E./M.Tech
Certifications (if any): NA
Experience Range: 9 to 12 years
Foundational Skills: Advanced knowledge of SQL and Python. Advanced Excel, VSCode, LaTeX, Tableau skills. Experience in multiple data environments such as Oracle, Hadoop, and Teradata. Knowledge of data architecture concepts, data models, ETL processes. Knowledge of climate risk, financial concepts & products. Experience in extracting and combining data from multiple sources, and aggregating data for model development. Experience in conducting quantitative analysis, performing model-driven analytics, and developing models. Experience in documenting business requirements for data, model, implementation, etc.
Desired Skills: Basics of Finance. Basics of Climate Risk.
Work Timings: 11:30 AM to 8:30 PM
Job Location: Hyderabad, Chennai
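A toy pandas illustration in the spirit of the PCAF-style attribution approach commonly used for financed emissions: attribute each borrower's emissions to the bank in proportion to the bank's share of its financing. All figures and column names are hypothetical, and the real methodology differs by asset class and data quality tier.

```python
import pandas as pd

# Hypothetical corporate loan book: exposure, borrower enterprise value
# (incl. cash, "EVIC"), and reported company emissions.
book = pd.DataFrame({
    "borrower": ["Co_A", "Co_B"],
    "outstanding_usd_m": [50.0, 20.0],
    "evic_usd_m": [500.0, 100.0],
    "company_emissions_tco2e": [200_000, 80_000],
})

# Attribution factor = outstanding / EVIC;
# financed emissions = attribution factor * company emissions.
book["attribution_factor"] = book.outstanding_usd_m / book.evic_usd_m
book["financed_emissions_tco2e"] = (
    book.attribution_factor * book.company_emissions_tco2e
)

print(book[["borrower", "attribution_factor", "financed_emissions_tco2e"]])
print("Portfolio financed emissions:",
      book.financed_emissions_tco2e.sum(), "tCO2e")
```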
Posted 1 week ago
3.0 years
0 Lacs
Greater Ahmedabad Area
On-site
Company Profile
Infocusp Innovations is an IT firm working in the broad fields of Machine Learning, Artificial Intelligence (AI), Computer Science, Software Engineering, Mobile and Web App Development, QA, and Signal Processing. With a global presence in India and the United States of America, and offices in Ahmedabad, Pune and California, we make businesses smart and systems smarter to make people's lives easier. Our teams work closely with the research and product engineering teams to develop next-gen robust solutions that improve the quality of life of users and enhance the user experience. We have worked on high-impact projects in the domains of Healthcare, FinTech, IoT, Law and Humanity, AgroTech and Horticulture, Molecular Chemistry, GeoPhysics, Biology, Energy, Logistics, Recruitment and Gaming. Transparent communication and healthy, constructive discussions, along with an open and welcoming work culture, are what we prefer at Infocusp. We make conscious decisions to facilitate our work with favorable working conditions and amenities that enable our employees to give their best.
About the Role
A Senior Software Engineer is a highly proficient professional who excels in designing, developing, and maintaining complex software systems. They demonstrate expertise in multiple programming languages, possess a deep understanding of software architecture, and often lead significant projects or mentor junior engineers. This role involves making critical technical decisions, optimizing software performance, resolving intricate challenges, and contributing to innovative solutions. Senior Software Engineers play a pivotal role in shaping software strategies, driving technical excellence, and staying current with industry trends to deliver robust and advanced software applications.
Responsibilities: Own the design, development, evaluation, and deployment of highly scalable software products involving front-end and back-end development. Maintain quality, responsiveness, and stability of the system. Evaluate and make decisions on the use of new tools and technologies. Design and develop memory-efficient, compute-optimized solutions for the software. Delegate tasks and mentor junior engineers. Prioritize and distribute tasks amongst the team members. Design and administer automated testing tools and continuous integration tools. Produce comprehensive and usable software documentation. Follow secure development, testing, and deployment guidelines and practices in order to adhere to the overall security of the system under consideration.
Requirements: B.E./B.Tech/M.E./M.S./M.Tech/PhD candidates with significant prior experience in the fields above will be considered. 3+ years of relevant industry experience with TypeScript or Python. Mastery of one or more back-end programming languages (JavaScript, Java, Scala, C++, etc.). Experience in using cloud services (AWS, GCP, Microsoft Azure) and distributed systems (Hadoop, Spark, Beam). Deep understanding of various relational (SQL, PostgreSQL), non-relational (MongoDB, DynamoDB, Cassandra) and time-series (InfluxDB) database systems. Knowledge of automated and continuous integration testing tools (Jenkins, TeamCity, CircleCI, etc.). Good to have: experience in front-end programming paradigms and libraries (for example, advanced JavaScript libraries and frameworks such as Angular and React). Ability to plan and design software system architecture. Proven experience in platform development for large-scale systems.
Location: Ahmedabad/Pune
Contact us to apply: If you would like to apply for this role, send your resume to careers@infocusp.com.
Posted 1 week ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities: Be able to elicit business needs and define requirements for reporting solutions, which includes driving consensus across multiple stakeholders who often have different priorities and needs, as well as providing guidance for business stakeholders as needed to clarify requirements and presentation, format and layout options for the information. Support an iterative report development process and serve as a liaison between the BI developers and the business. Build scalable, fault-tolerant batch and real-time data ingestion pipelines, data transformation and data mining jobs. Analysis and development of Cloud and Big Data solutions. Assemble large, complex analytics-relevant quality datasets from different source systems in the organization. Work with different stakeholders to build data process automations for more efficient and reliable business processes. Implement software components needed for better data platform governance. Manage large-scale data repositories, including their maintenance, scheduled backups and enhancements. Implement data and platform products to enable different users to carry out their tasks with ease and clarity. Work collaboratively with different users and stakeholders to ensure their success with the right technical solutions. Perform different data warehousing activities like transformation, enrichment, and dataset metadata management for the team to effectively use the platform. Present analysis and interpretation of findings for operational and business review, planning and new product development. Partner with BI and report development resources to design and prototype innovative solutions. Support end-users by answering questions, developing training materials and conducting training activities as needed. Document technical applications, specifications, and enhancements. Recommend ways to improve data reliability, quality and efficiency. Maintain production systems (Talend, Spark/Scala, Java microservices, Kafka, Hadoop, Cassandra, Elasticsearch). Develop reusable patterns and encourage innovation that will increase team velocity. Anticipate issues and act proactively to address potential issues. Work with sometimes ambiguous/conceptual requirements and guide the technical team to provide functionality with the right amount of engineering. Lead engineers in making sound, sustainable, and practical technical decisions. Foster high-performance, collaborative technical work resulting in high-quality output. Collaborate on the design with other team members and product owners, both inside and outside the core team. Work with geographically distributed teams, with ample opportunity to learn from and mentor teammates in a fast-paced environment. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work
shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications: Undergraduate degree or equivalent experience. 4+ years of hands-on data analysis experience. 4+ years of experience with Python programming-based development. 4+ years of experience in relational databases (SQL queries, stored procedures, CTEs, triggers, etc.; a small CTE sketch follows below). 4+ years of experience in Azure Cloud and Linux. 2+ years of experience building and optimizing data pipelines, architectures and data sets. 2+ years of hands-on experience with shell scripting on Linux. Experience with GitHub and Linux VMs. Experience with visualization tools like Power BI. Demonstrated success in building design patterns and software engineering best practices.
Preferred Qualifications: Microsoft Power Platform (PowerApps, Power Automate and Power BI) experience or knowledge. Microsoft SharePoint Lists and Forms experience or knowledge. DevOps experience.
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
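A small runnable sketch of the CTE-style SQL the qualifications list, using SQLite for portability; on the job the same pattern would run against the production RDBMS, and the table and column names here are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claims (member_id TEXT, claim_month TEXT, paid_amount REAL);
INSERT INTO claims VALUES
  ('M1', '2024-01', 120.0), ('M1', '2024-02', 80.0),
  ('M2', '2024-01', 300.0), ('M2', '2024-02', 450.0);
""")

# The CTE computes per-member monthly totals; the outer query keeps the
# months where a member's spend exceeded their own average -- a typical
# reporting pattern.
sql = """
WITH monthly AS (
    SELECT member_id, claim_month, SUM(paid_amount) AS total_paid
    FROM claims
    GROUP BY member_id, claim_month
)
SELECT m.member_id, m.claim_month, m.total_paid
FROM monthly m
WHERE m.total_paid > (SELECT AVG(total_paid)
                      FROM monthly x
                      WHERE x.member_id = m.member_id)
"""
for row in conn.execute(sql):
    print(row)
```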
Posted 1 week ago
5.0 - 10.0 years
19 - 25 Lacs
Hyderabad
Work from Office
Overview
The primary focus will be to perform development work within the Azure Data Lake environment and other related ETL technologies, with the responsibility of ensuring on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. This role will also have L3 responsibilities for ETL processes.
Responsibilities: Delivery of key Azure Data Lake projects within time and budget. Contribute to solution design and build to ensure scalability, performance and reuse of data and other components. Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards. Possess strong problem-solving abilities with a focus on managing to business outcomes through collaboration with multiple internal and external parties. Enthusiastic, willing and able to learn and continuously develop skills and techniques; enjoys change and seeks continuous improvement. A clear communicator, both written and verbal, with good presentational skills, fluent and proficient in the English language. Customer-focused and a team player.
Qualifications: Bachelor's degree in Computer Science, MIS, Business Management, or related field. 5+ years' experience in Information Technology. 4+ years' experience in Azure Data Lake.
Technical Skills: Proven experience in development activities in Data, BI or Analytics projects. Solutions delivery experience - knowledge of the system development lifecycle, integration, and sustainability. Strong knowledge of PySpark and SQL (an incremental-load sketch follows below). Good knowledge of Azure Data Factory or Databricks. Knowledge of Presto/Denodo is desirable. Knowledge of FMCG business processes is desirable.
Non-Technical Skills: Excellent remote collaboration skills. Experience working in a matrix organization with diverse priorities. Exceptional written and verbal communication skills along with collaboration and listening skills. Ability to work with agile delivery methodologies. Ability to ideate requirements & design iteratively with business partners without formal requirements documentation.
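A hedged PySpark sketch of an incremental (watermark-based) load, a pattern common in the kind of ADF/Databricks pipelines this role maintains. The paths, column names, and the hard-coded watermark are assumptions for illustration; in practice the watermark would come from a control table.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-sketch").getOrCreate()

# Last successfully loaded watermark (normally read from a control table).
last_watermark = "2024-01-31"

# Pull only rows changed since the last run.
source = spark.read.parquet("/mnt/datalake/raw/sales")
increment = source.filter(F.col("updated_at") > F.lit(last_watermark))

# Append the increment to the curated zone, stamped with the load date.
(increment.withColumn("load_date", F.current_date())
          .write.mode("append")
          .parquet("/mnt/datalake/curated/sales"))

# Advance the watermark for the next run.
new_watermark = increment.agg(F.max("updated_at")).first()[0]
print("Advance control-table watermark to:", new_watermark)
```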
Posted 1 week ago
4.0 - 8.0 years
15 - 30 Lacs
Noida, Hyderabad, India
Hybrid
Spark architecture, Spark tuning, Delta tables, medallion architecture, Databricks, Azure cloud services, Python OOP concepts, complex PySpark transformations, reading data from different file formats and sources and writing to Delta tables, data warehousing concepts, and how to process large files and handle pipeline failures in current projects (a bronze-to-silver sketch follows below).
Roles and Responsibilities: design and build pipelines covering the skills listed above.
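A minimal bronze-to-silver medallion sketch writing Delta tables, assuming a Databricks-style runtime where the delta format is available. Paths and schema are hypothetical; gold-layer aggregates would follow the same shape.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land the raw file as-is, tagged with ingest metadata.
bronze = (spark.read.option("header", True).csv("/mnt/raw/orders.csv")
               .withColumn("_ingest_ts", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: enforce types, drop bad rows, deduplicate on the business key.
silver = (spark.read.format("delta").load("/mnt/bronze/orders")
               .withColumn("order_amount", F.col("order_amount").cast("double"))
               .filter(F.col("order_id").isNotNull())
               .dropDuplicates(["order_id"]))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")
```

Keeping the raw landing untouched in bronze is what makes pipeline failures recoverable: a failed silver job can simply be re-run from bronze without re-ingesting the source.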
Posted 1 week ago
5.0 - 8.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary
The Engineering Tools and Services organization is responsible for bringing efficiency and consistency to the way we automate, execute and triage tests and report results. We support the core ONTAP product team, comprising more than 2500 engineers.
About the Team
This is a Software Tools and Build Systems Engineer role, developing and supporting software builds, build operations and software tools in a UNIX environment. In this position you will work as part of the team developing and enhancing NetApp’s best-in-class build system, driving improvements in the development environment and improved development productivity by improving the tools, build architecture and processes.
Job Responsibilities and Requirements
• Build and maintain software versions regularly for multiple platforms like AWS, GCP, Azure, and IBM, ensuring timely updates and releases.
• Create and use tools to automate repetitive tasks, making processes faster and more reliable (a small sketch follows below).
• Monitor systems to ensure they are running smoothly, identifying and fixing issues before they become problems.
• Respond to and resolve technical issues quickly to minimize disruptions.
• Work closely with different teams to ensure projects and product releases are completed on schedule.
Technical Skills
• Strong programming skills in Go/Perl/Python.
• Familiarity with OO design, web development, and cloud APIs.
• Experience in a Linux environment with containers (Docker & Kubernetes).
• Familiarity with Agile concepts, Continuous Integration and Continuous Delivery.
• Creative, analytical approach to problem solving.
Education
• A minimum of 4 years of experience is required; 5-8 years of experience is preferred.
• A Bachelor of Science degree in Electrical Engineering or Computer Science, or a Master's degree, or equivalent experience is required.
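A tiny Python sketch of the automation flavour of this role: run a build step, retry transient failures, and surface a clear status. The command here is a placeholder, not a real build invocation.

```python
import subprocess
import sys
import time

def run_with_retry(cmd, attempts=3, delay_s=10):
    """Run a build step, retrying transient failures before giving up."""
    for i in range(1, attempts + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        print(f"attempt {i}/{attempts} failed: {result.stderr.strip()}",
              file=sys.stderr)
        time.sleep(delay_s)
    raise RuntimeError(f"build step failed after {attempts} attempts: {' '.join(cmd)}")

if __name__ == "__main__":
    # Placeholder command; a real pipeline would invoke the build tool here.
    print(run_with_retry(["echo", "build ok"]))
```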
Posted 1 week ago
3.0 - 8.0 years
9 - 18 Lacs
Hyderabad, Gurugram
Work from Office
3-8 years' experience. Job Location: Gurgaon and Hyderabad. Work Mode: Hybrid (3-4 days work from office). Notice Period: Immediate to 30 days official notice period, or candidates currently serving a 45-day notice period, only.
Posted 1 week ago
3.0 - 8.0 years
35 - 50 Lacs
Bengaluru
Work from Office
About the Role: As a Data Engineer, you will be part of the Data Engineering team, with this role being inherently multi-functional; the ideal candidate will work with Data Scientists, Analysts, and application teams across the company, as well as all other Data Engineering squads at Wayfair. We are looking for someone with a love for data, the ability to understand requirements clearly, and the ability to iterate quickly. Successful candidates will have strong engineering and communication skills and a belief that data-driven processes lead to phenomenal products.
What you'll do: Build and launch data pipelines and data products focussed on the SMART Org, helping teams push the boundaries of insights, creating new product features using data, and powering machine learning models. Build cross-functional relationships to understand data needs, build key metrics and standardize their usage across the organization. Utilize current and leading-edge technologies in software engineering, big data, streaming, and cloud infrastructure (a minimal orchestration sketch follows below).
What You'll Need: Bachelor's/Master's degree in Computer Science or a related technical subject area, or an equivalent combination of education and experience. 3+ years' relevant work experience in the Data Engineering field with web-scale data sets. Demonstrated strength in data modeling, ETL development and data lake architecture. Data warehousing experience with big data technologies (Hadoop, Spark, Hive, Presto, Airflow, etc.). Coding proficiency in at least one modern programming language (Python, Scala, etc.). Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing, and query performance tuning skills on large data sets. Industry experience as a Big Data Engineer working alongside cross-functional teams such as Software Engineering, Analytics, and Data Science, with a track record of manipulating, processing, and extracting value from large datasets. Strong business acumen. Experience leading large-scale data warehousing and analytics projects, including using GCP technologies BigQuery, Dataproc, GCS, Cloud Composer, Dataflow, or related big data technologies in other cloud platforms like AWS, Azure, etc. Be a team player and introduce/follow best practices in the data engineering space. Ability to effectively communicate (both written and verbally) technical information and the results of engineering design at all levels of the organization.
Good to have: Understanding of NoSQL databases and Pub/Sub architecture setup. Familiarity with BI tools like Looker, Tableau, AtScale, PowerBI, or any similar tools.
PS: This role is with one of our clients, a leading name in the Retail industry.
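A hedged sketch of the orchestration layer mentioned above: a minimal Airflow DAG with an extract-then-transform dependency, assuming Airflow 2.x (2.4+ for the schedule argument). Task bodies are stubbed; real tasks would trigger Spark or BigQuery jobs.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")

def transform():
    print("build curated tables")

with DAG(
    dag_id="daily_sales_pipeline",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    # transform runs only after extract succeeds
    t_extract >> t_transform
```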
Posted 1 week ago
7.0 years
0 Lacs
India
Remote
Job Title: Senior Database Administrator
Experience: 7+ Years
Location: Remote
Contract Duration: Long Term
Work Time: IST Shift
Job Summary
The Senior Database Administrator (DBA) is responsible for managing, maintaining, and optimizing the organization’s database systems. The role involves working on strategic initiatives, ensuring high availability, and aligning database infrastructure with long-term business goals while adhering to best practices in database administration.
Responsibilities: Optimize database queries for fast and efficient data retrieval, especially for complex or high-volume operations. Design and implement indexing strategies to enhance query performance (a small index/plan sketch follows below). Monitor and analyze inefficient queries, providing recommendations for improvements. Evaluate execution plans to identify performance bottlenecks. Schedule and perform routine maintenance tasks such as backups and index rebuilding. Implement automated monitoring systems to track database health and performance. Proactively diagnose and resolve issues like locking, deadlocks, and data corruption. Manage clustering, replication, and failover strategies to ensure high availability. Monitor and plan for database growth and scalability. Optimize resource usage including CPU, memory, disk, and network. Ensure compliance with database licensing models and explore cost-saving opportunities. Monitor and optimize cloud database expenses using tools like AWS Cost Explorer and Azure Cost Management.
Primary Skills: 5 to 7 years of hands-on experience in Microsoft SQL Server administration.
Qualifications: Bachelor’s degree in Computer Science, Software Engineering, or a related field. Microsoft SQL certifications (MTA Database, MCSA: SQL Server, MCSE: Data Management and Analytics) are an advantage.
Secondary Skills (Preferred): Experience in MySQL, PostgreSQL, and Oracle database administration. Familiarity with Data Lake, Hadoop, and Azure. Exposure to DevOps or ITIL practices.
Behavioural Competencies: Strong communication skills. Effective teamwork and collaboration. Digital and analytical mindset. Commitment to operational excellence. Customer-centric approach. Business and market awareness. Empathy and adaptability. Growth-oriented attitude.
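An illustrative look at the index and execution-plan work described above, shown with SQLite so it runs anywhere; on SQL Server the equivalent would be CREATE INDEX DDL plus the actual execution plan in SSMS. Table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER PRIMARY KEY, "
    "customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 1000, (i * 7) % 500) for i in range(10_000)],
)

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 42"

print("Plan without index:")
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(" ", row)  # expect a full table SCAN

# An index on the filter column turns the scan into an index SEARCH.
conn.execute("CREATE INDEX ix_orders_customer ON orders(customer_id)")

print("Plan with index:")
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(" ", row)
```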
Posted 1 week ago
10.0 - 15.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Req ID: 322003 We are currently seeking a Sr. ETL Developer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Strong hands-on experience in SQL and PL/SQL [procedures, functions].
Expert-level knowledge of ETL flows & jobs [ADF pipeline experience preferred].
Experience with MS-SQL [preferred], Oracle DB, PostgreSQL, MySQL.
Good knowledge of Data Warehouse/Data Mart.
Good knowledge of data structures/models, integrity constraints, performance tuning, etc.
Good knowledge of the Insurance domain (preferred).
Total Experience: 7 – 10 Yrs.
Posted 1 week ago
4.0 - 9.0 years
3 - 7 Lacs
Pune
Work from Office
Req ID: 324609 We are currently seeking a Data Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Key Responsibilities:
Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack.
Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure.
Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations.
Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS.
Develop and deliver detailed presentations to effectively communicate complex technical concepts.
Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc.
Adhere to Agile practices throughout the solution development process.
Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications:
4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects.
2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions.
Experience with Informatica, Python, Databricks, and Azure data engineering.
Ability to travel at least 25%.

Preferred Skills:
Demonstrated production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more.
Hands-on knowledge of Cloud and Distributed Data Storage, including HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems.
Strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc.
Professional written and verbal communication skills to effectively convey complex technical concepts.
Undergraduate or graduate degree preferred.
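As a sketch of the kind of coding proficiency this role calls for, the snippet below shows a small PySpark batch transformation; the paths, column names, and aggregation are purely illustrative.

```python
# Minimal PySpark sketch: read raw events, apply a simple transformation,
# and write a partitioned Parquet output. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_batch_transform").getOrCreate()

events = (
    spark.read.json("s3a://example-bucket/raw/events/")   # placeholder source
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("event_type").isNotNull())
)

daily_counts = events.groupBy("event_date", "event_type").agg(
    F.count("*").alias("event_count")
)

(
    daily_counts.write.mode("overwrite")
    .partitionBy("event_date")                            # one folder per day
    .parquet("s3a://example-bucket/curated/daily_event_counts/")
)
```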
Posted 1 week ago
4.0 - 9.0 years
3 - 7 Lacs
Pune
Work from Office
Req ID: 324653 We are currently seeking a Data Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Key Responsibilities:
Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack.
Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure.
Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations.
Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS.
Develop and deliver detailed presentations to effectively communicate complex technical concepts.
Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc.
Adhere to Agile practices throughout the solution development process.
Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications:
4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects.
2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions.
Experience with Informatica, Python, Databricks, and Azure data engineering.
Ability to travel at least 25%.

Preferred Skills:
Demonstrated production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more.
Hands-on knowledge of Cloud and Distributed Data Storage, including HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems.
Strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc.
Professional written and verbal communication skills to effectively convey complex technical concepts.
Undergraduate or graduate degree preferred.
Posted 1 week ago
12.0 - 17.0 years
7 - 11 Lacs
Chennai
Work from Office
Req ID: 303369 We are currently seeking an Enterprise Resource Planning Advisor to join our team in Chennai, Tamil Nadu (IN-TN), India (IN). The candidate has more than 12 years of relevant experience with Oracle ERP Cloud data migration and a minimum of 4 end-to-end ERP Cloud implementations.

Analyze Data and Mapping: Work with functional teams to understand data requirements and develop mappings to enable smooth migration using FBDI (File-Based Data Import) and ADFdi.
Develop and Manage FBDI Templates: Design, customize, and manage FBDI templates to facilitate data import into Oracle SaaS applications, ensuring data structure compatibility and completeness.
Configure ADFdi for Data Uploads: Use ADFdi (Application Development Framework Desktop Integration) for interactive data uploads, enabling users to manipulate and validate data directly within Excel.
Data Validation and Quality Checks: Implement data validation rules and perform pre- and post-load checks to maintain data integrity and quality, reducing errors during migration.
Execute and Troubleshoot Data Loads: Run data loads, monitor progress, troubleshoot errors, and optimize performance during the data migration process.
Collaborate with Stakeholders: Coordinate with cross-functional teams, including project managers and business analysts, to align on timelines, resolve data issues, and provide migration status updates.
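FBDI imports are, at bottom, CSV files packaged into a zip that Oracle ERP Cloud's import processes consume. The sketch below shows only that packaging step in Python; the file name and column layout are hypothetical, and real layouts must come from Oracle's published FBDI template workbooks for each object.

```python
# Minimal sketch: write an FBDI-style CSV and package it as a zip for upload.
# The file name and column layout are illustrative only; real layouts come
# from the Oracle-published FBDI template workbooks.
import csv
import zipfile

rows = [
    # hypothetical journal lines: ledger, date, account, debit, credit
    ("US Primary Ledger", "2024-01-31", "01-000-1110", "1500.00", ""),
    ("US Primary Ledger", "2024-01-31", "01-000-2210", "", "1500.00"),
]

with open("GlInterface.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

with zipfile.ZipFile("gl_import.zip", "w", zipfile.ZIP_DEFLATED) as z:
    z.write("GlInterface.csv")

print("gl_import.zip ready for upload to the ERP import process")
```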
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
3-5 years of experience as an AI/ML engineer or in a similar role.
Strong knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
Hands-on experience with model development and deployment processes.
Proficiency in programming languages such as Python.
Experience with data preprocessing, feature engineering, and model evaluation techniques.
Familiarity with cloud platforms (e.g., AWS) and containerization (e.g., Docker, Kubernetes).
Familiarity with version control systems (e.g., GitHub).
Proficiency in data manipulation and analysis using libraries such as NumPy and Pandas.
Good to have: knowledge of deep learning and ML Ops (Kubeflow, MLFlow, Nextflow).
Knowledge of text analytics, NLP, and Gen AI.

Mandatory Skill Sets: ML Ops, AI/ML
Preferred Skill Sets: ML Ops, AI/ML
Years of Experience Required: 4 - 8
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified): Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Full Stack Development
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
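To ground the preprocessing, training, and evaluation workflow this role describes, here is a minimal scikit-learn sketch on a synthetic dataset; the pipeline steps and scoring metric are illustrative choices, not a prescribed stack.

```python
# Minimal sketch of the preprocess -> train -> evaluate loop, using
# scikit-learn on a synthetic dataset so it runs standalone.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),                 # preprocessing step
    ("clf", LogisticRegression(max_iter=1_000)), # simple baseline model
])

scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Bundling preprocessing and the estimator in a single Pipeline keeps the scaler from leaking test-fold statistics during cross-validation, which is the usual reason for this pattern.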
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role

BlackRock Overview: BlackRock is one of the world’s preeminent asset management firms and a premier provider of global investment management, risk management and advisory services to institutional, intermediary and individual investors around the world. BlackRock offers a range of solutions — from rigorous fundamental and quantitative active management approaches aimed at maximizing outperformance to highly efficient indexing strategies designed to gain broad exposure to the world’s capital markets. Our clients can access our investment solutions through a variety of product structures, including individual and institutional separate accounts, mutual funds and other pooled investment vehicles, and the industry-leading iShares® ETFs.

Aladdin Financial Engineering Group (AFE)
AFE is a diverse and global team with a keen interest and expertise in all things related to technology and financial analytics. The group is responsible for the research and development of quantitative financial and behavioral models and tools across many different areas – single-security pricing, prepayment models, risk, return attribution, liquidity, optimization and portfolio construction, scenario analysis and simulations, etc. – and covering all asset classes. The group is also responsible for the technology platform that delivers those models to our internal partners and external clients, and their integration with Aladdin. AFE conducts leading research on the areas above, delivering state-of-the-art models. AFE publishes applied scientific research frequently, and our members present regularly at leading industry conferences. AFE engages constantly with the sales team in client visits and meetings.

Job Description
You can help conduct research to build quantitative financial models and portfolio analytics that help manage most of the money of the world’s largest asset manager. You can bring all of yourself to the job. From the top of the firm down, we embrace the values, identities and ideas brought by our employees. We are looking for curious people with a strong background in data science, quantitative research and machine learning, awesome problem-solving skills, and an insatiable appetite for learning and innovating, adding to BlackRock’s vibrant research culture. If any of this excites you, we are looking to expand our team.

We currently have a Data Scientist role with the AFE Investment AI (IAI) Team, India (Mumbai or Gurugram location). The securities market is undergoing a massive transformation as the industry embraces machine learning and, more broadly, AI, to help evolve the investment process. Pioneering this journey at BlackRock, the team delivers applied AI investment analytics to help both BlackRock and Aladdin clients achieve scale through automation while safeguarding alpha generation. The IAI team combines AI/ML methodology and technology skills with deep subject matter expertise in fixed income, equity, and multi-asset markets, and the buy-side investment process. We are building next-generation liquidity, security similarity and pricing models leveraging our expertise in quantitative research, data science and machine learning. The models we build use innovative machine learning approaches, have real practical value and are used by traders and portfolio managers alike. Our models use cutting-edge econometric/statistical methods and tools.
The models themselves have real practical value and are used by traders, portfolio managers and risk managers representing different investment styles (fundamental vs. quantitative) and across different investment horizons. Research is conducted predominantly in Python and Scala, and implemented into production by a separate, dedicated team of developers. These models have a huge footprint of usage across the entire Aladdin client base, and so we place special emphasis on scalability and ensuring adherence to BlackRock’s rigorous standards of model governance and control.

Background And Responsibilities
We are looking to hire a Data Scientist with 4+ years’ experience to join the AFE Investment AI India team focusing on Trading and Liquidity, working closely with other data scientists/researchers to support Risk Managers, Portfolio Managers and Traders. We build cutting-edge liquidity analytics using a wide range of ML algorithms and a broad array of technologies (Python, Scala, Spark/Hadoop, GCP, Azure). This role is a great opportunity to work closely with the Portfolio Managers, Risk Managers and Trading team, spanning areas such as:
Design, develop, and maintain data pipelines to extract, transform, and load data from various sources into our data warehouse/lake.
Work with data scientists and analysts to understand data needs and design appropriate data models.
Implement data quality checks and ensure the accuracy and consistency of data throughout the processing pipeline.
Perform analysis of large data sets comprising market data, trading data and derived analytics.
Design and develop a model surveillance framework.
Automate data processing tasks using scripting languages (e.g., Python, Scala) and orchestration tools (e.g., Airflow, Luigi).
Utilize cloud-based data platforms (e.g., GCP, Azure) to manage and process large datasets efficiently.
Implement ML models/analytics for Trading/Liquidity and integrate them into the Aladdin analytical system in accordance with BlackRock’s model governance policy.

Qualifications
B.Tech / B.E. / M.Sc. degree in a quantitative discipline (Mathematics, Physics, Computer Science, Finance or a similar area). MS/M.Tech./PhD is a plus.
Strong background in Mathematics, Statistics, Probability, and Linear Algebra.
Knowledgeable about data mining, data analytics and data modeling.
Experience with data engineering tools and technologies (e.g., Apache Spark, Hadoop).
Strong understanding of relational and non-relational databases (e.g., SQL, NoSQL).
Proficiency in scripting languages for data manipulation and automation (e.g., Python, Scala).
Experience working with cloud platforms for data storage and processing (e.g., Azure, GCP).
Ability to work independently and efficiently in a fast-paced and team-oriented environment.
Previous experience or knowledge of the fixed income market and market liquidity is not required but is a big plus. For professionals with no prior financial industry experience, this position is a unique opportunity to gain in-depth knowledge of the asset management process in a world-class organization.

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
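Returning to the data-quality responsibility listed above, the sketch below shows what such checks might look like in pandas; the column names and thresholds are invented for illustration and are not from any BlackRock system.

```python
# Minimal sketch of pipeline data-quality checks in pandas; column names and
# thresholds are illustrative placeholders.
import pandas as pd

def check_trades(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality violations."""
    problems = []
    if df["trade_id"].duplicated().any():
        problems.append("duplicate trade_id values")
    if df["notional"].lt(0).any():
        problems.append("negative notional amounts")
    null_rate = df["price"].isna().mean()
    if null_rate > 0.01:  # tolerate up to 1% missing prices
        problems.append(f"price null rate {null_rate:.1%} exceeds 1%")
    return problems

# Tiny example frame with one violation of each kind.
trades = pd.DataFrame(
    {"trade_id": [1, 2, 2], "notional": [1e6, -5e5, 2e6], "price": [101.2, None, 99.8]}
)
for issue in check_trades(trades):
    print("DQ violation:", issue)
```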
Our hybrid work model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.

For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 1 week ago
3.0 - 7.0 years
5 - 10 Lacs
Chennai
Work from Office
Req ID: 316318 We are currently seeking a Senior .NET/GCP Engineer (REMOTE) to join our team in Chennai, Tamil Nadu (IN-TN), India (IN). Senior .NET/GCP Engineer - Remote. How You’ll Help Us: A Senior Application Developer is first and foremost a software developer who specializes in .NET C#.
Posted 1 week ago
1.0 - 4.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Req ID: 321498 We are currently seeking a Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties
Work closely with the Lead Data Engineer to understand business requirements, and analyse and translate these requirements into technical specifications and solution design.
Work closely with the Data Modeller to ensure data models support the solution design.
Develop, test, and fix ETL code using Snowflake, Fivetran, SQL, and stored procedures.
Analyse the data and ETL for defects/service tickets raised (for solutions in production).
Develop documentation and artefacts to support projects.

Minimum Skills Required
ADF
Fivetran (orchestration & integration)
SQL
Snowflake DWH
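For flavour, here is a minimal sketch of the Snowflake side of such ETL work using the snowflake-connector-python package; the account, credentials, and table names are placeholders, and the MERGE is an illustrative pattern rather than project code.

```python
# Minimal sketch: run an idempotent MERGE in Snowflake from Python, assuming
# the snowflake-connector-python package. Connection values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",            # placeholder; prefer key-pair auth in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

MERGE_SQL = """
MERGE INTO ANALYTICS.CORE.CUSTOMERS AS tgt
USING ANALYTICS.STAGING.CUSTOMERS_DELTA AS src
    ON tgt.CUSTOMER_ID = src.CUSTOMER_ID
WHEN MATCHED THEN UPDATE SET tgt.EMAIL = src.EMAIL, tgt.UPDATED_AT = src.UPDATED_AT
WHEN NOT MATCHED THEN INSERT (CUSTOMER_ID, EMAIL, UPDATED_AT)
    VALUES (src.CUSTOMER_ID, src.EMAIL, src.UPDATED_AT);
"""

with conn.cursor() as cur:
    cur.execute(MERGE_SQL)
    print("rows merged:", cur.rowcount)
conn.close()
```

A MERGE keyed on the natural identifier keeps the load re-runnable, which simplifies the defect and service-ticket work described above.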
Posted 1 week ago
8.0 - 13.0 years
13 - 17 Lacs
Bengaluru
Work from Office
We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Data Engineer Lead
Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git and CI/CD pipelines (mandatory)
Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink
Experienced with software support for applications written in Python & SQL
Administration, configuration and maintenance of Snowflake & dbt
Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform and GitHub
Debugging issues, root cause analysis, and applying fixes
Management and maintenance of ETL processes (bug fixing and batch job monitoring)

Training & Certification
Apache Kafka Administration
Snowflake Fundamentals/Advanced Training

Experience
8 years of experience in a technical role working with AWS
At least 2 years in a leadership or management role
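To illustrate the Kafka side of this stack, below is a minimal producer sketch using the confluent-kafka Python client; the broker address, topic, and payload are placeholders.

```python
# Minimal sketch of a Kafka producer using the confluent-kafka client;
# broker address, topic name, and payload are placeholders.
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker1:9092"})  # placeholder broker

def on_delivery(err, msg):
    # Invoked by poll()/flush() once per message, reporting success or failure.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

event = {"order_id": 42, "status": "FILLED"}  # illustrative payload
producer.produce(
    topic="orders",
    key=str(event["order_id"]),   # keying by ID keeps an entity on one partition
    value=json.dumps(event),
    on_delivery=on_delivery,
)
producer.flush()  # block until the delivery report arrives
```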
Posted 1 week ago
4.0 - 5.0 years
6 - 10 Lacs
Chennai
Work from Office
We are currently seeking a Data Visualization Expert - QuickSight to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

What awaits you / Job Profile
Location: Bangalore and Chennai, hybrid mode. Immediate to 10 days' notice period.
Develop reports using Amazon QuickSight.
Data Visualization Development: Design and develop data visualizations using Amazon QuickSight to present complex data in a clear and understandable format. Create interactive dashboards and reports that allow end-users to explore data and draw meaningful conclusions.
Data Analysis: Collaborate with data analysts and business stakeholders to understand data requirements, gather insights, and transform raw data into actionable visualizations.
Dashboard User Interface (UI) and User Experience (UX): Ensure that the data visualizations are user-friendly, intuitive, and aesthetically pleasing. Optimize the user experience by incorporating best practices in UI/UX design.
Data Integration: Work closely with data engineers and data architects to ensure seamless integration of data sources into QuickSight, enabling real-time and up-to-date visualizations.
Performance Optimization: Identify and address performance bottlenecks in data queries and visualization rendering to ensure quick and responsive dashboards.
Data Security and Governance: Ensure compliance with data security policies and governance guidelines when handling sensitive data within QuickSight.
Training and Documentation: Provide training and support to end-users and stakeholders on how to interact with and interpret visualizations effectively. Create detailed documentation of the visualization development process.
Stay Updated with Industry Trends: Keep up to date with the latest data visualization trends, technologies, and best practices to continuously enhance the quality and impact of visualizations.
Use the Agile methodology (Scrum/Kanban), attending daily standups and using Agile tools.
Collaborate with cross-functional teams and stakeholders to ensure data security, privacy, and compliance with regulations.
Proficiency in software development best practices: secure coding standards, unit testing frameworks, code coverage, quality gates.
Ability to lead and deliver change in a very productive way.
Lead technical discussions with customers to find the best possible solutions.
Work closely with the Project Manager and Solution Architect, managing client communication (as and when required).

What should you bring along
Must Have
Relevant work experience in analytics, reporting and business intelligence tools.
4-5 years of hands-on experience in data visualization.
Around 2 years of experience developing visualizations using Amazon QuickSight.
Experience working with various data sources and databases.
Ability to work with large datasets and design efficient data models for visualization.

Nice to Have
AI project implementation and AI methods.

Must-have technical skills: QuickSight, SQL, AWS
Good-to-have technical skills: Tableau, Data Engineering
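QuickSight assets can also be managed programmatically, which helps with the governance and documentation duties above. Here is a minimal sketch using boto3 to enumerate dashboards; the account ID and region are placeholders, and suitable IAM permissions are assumed.

```python
# Minimal sketch: enumerate QuickSight dashboards via boto3. The account ID
# and region are placeholders; QuickSight IAM permissions are assumed.
import boto3

qs = boto3.client("quicksight", region_name="ap-south-1")
account_id = "123456789012"  # placeholder AWS account ID

paginator = qs.get_paginator("list_dashboards")
for page in paginator.paginate(AwsAccountId=account_id):
    for dash in page["DashboardSummaryList"]:
        print(dash["DashboardId"], "-", dash["Name"])
```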
Posted 1 week ago
The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.
India's major IT hubs are known for their thriving technology industry and have a high demand for Hadoop professionals.
The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.
In the Hadoop field, a typical career path moves from Junior Developer to Senior Developer and Tech Lead, eventually progressing to roles like Data Architect or Big Data Engineer.
In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.
As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!