6.0 - 11.0 years
14 - 17 Lacs
Pune
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world’s technology leader. Come to IBM and make a global impact.
Responsibilities: Manage end-to-end feature development and resolve challenges faced in implementing it. Learn new technologies and apply them to feature development within the time frame provided. Manage debugging, root cause analysis and fixes for issues reported on the Content Management back-end software system.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Overall more than 6 years of experience, with 4+ years of strong hands-on experience in Python and Spark. Strong technical ability to understand, design, write and debug applications in Python and PySpark. Strong problem-solving skills.
Preferred technical and professional experience: Good to have hands-on experience with cloud technologies (AWS/GCP/Azure).
Posted 1 week ago
1.0 - 3.0 years
3 - 7 Lacs
Chennai
Hybrid
Strong experience in Python. Good experience in Databricks. Experience working in the AWS/Azure cloud platforms. Experience working with REST APIs and services, and with messaging and event technologies. Experience with ETL or data pipeline build tools. Experience with streaming platforms such as Kafka. Demonstrated experience working with large and complex data sets. Ability to document data pipeline architecture and design. Experience in Airflow is nice to have. The role involves building complex Delta Lake pipelines.
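For illustration only (not part of the posting above): a minimal PySpark Structured Streaming sketch of the Kafka-to-Delta-Lake pipeline pattern such a role typically builds. It assumes a cluster with the Kafka source and Delta Lake libraries available; the broker address, topic, schema and paths are hypothetical placeholders.

```python
# Hedged sketch: stream a hypothetical Kafka topic into a Delta Lake table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "payments")                    # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/payments")  # placeholder path
    .outputMode("append")
    .start("/tmp/delta/payments")                               # placeholder path
)
query.awaitTermination()
```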
Posted 1 week ago
4.0 - 9.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Roles and Responsibilities: 4+ years of experience as a data developer using Python. Knowledge of Spark/PySpark is preferable but not mandatory. Azure cloud experience preferred; alternate cloud experience is fine. Preferred experience with the Azure platform, including Azure Data Lake, Databricks and Data Factory. Working knowledge of different file formats such as JSON, Parquet and CSV. Familiarity with data encryption and data masking. Database experience in SQL Server is preferable; experience in NoSQL databases like MongoDB is preferred. Team player, reliable, self-motivated, and self-disciplined.
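For illustration only (not part of the posting): a minimal PySpark sketch touching two of the skills listed above, file-format handling and simple data masking. Paths, column names and the hashing choice are assumptions for the example, not requirements from the listing.

```python
# Hedged sketch: read JSON, mask a sensitive column, write partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, sha2

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

customers = spark.read.json("/mnt/raw/customers/")   # placeholder path

# Replace the raw email with a SHA-256 hash as a simple masking step
masked = customers.withColumn("email_hash", sha2(col("email"), 256)).drop("email")

(masked.write
    .mode("overwrite")
    .partitionBy("country")                           # placeholder partition column
    .parquet("/mnt/curated/customers/"))              # placeholder path
```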
Posted 1 week ago
1.0 - 3.0 years
2 - 5 Lacs
Chennai
Work from Office
Mandatory Skills: AWS, Python, SQL, Spark, Airflow, Snowflake.
Responsibilities: Create and manage cloud resources in AWS. Ingest data from different sources that expose data through various technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies. Process and transform data using technologies such as Spark and cloud services. Understand your part of the business logic and implement it using the language supported by the base data platform. Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations. Develop an infrastructure to collect, transform, combine and publish/distribute customer data. Define process improvement opportunities to optimize data collection, insights and displays. Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible. Identify and interpret trends and patterns from complex data sets. Construct a framework utilizing data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders. Be a key participant in regular Scrum ceremonies with the agile teams. Be proficient at developing queries, writing reports and presenting findings. Mentor junior members and bring best industry practices.
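For illustration only (not part of the posting): a minimal Airflow sketch of an ingestion task followed by an automated data quality check, the pattern described above. It assumes Airflow 2.4+; the DAG id, task bodies and row-count placeholder are hypothetical.

```python
# Hedged sketch: ingestion followed by a simple row-count quality gate.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_orders(**context):
    # Placeholder: pull from an RDBMS / REST API / flat files and land the data.
    pass


def check_row_count(**context):
    # Placeholder count; replace with a real query against Snowflake or Athena.
    rows = 42
    if rows <= 0:
        raise ValueError("Data quality check failed: no rows ingested")


with DAG(
    dag_id="orders_ingestion",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_orders", python_callable=ingest_orders)
    quality = PythonOperator(task_id="check_row_count", python_callable=check_row_count)
    ingest >> quality
```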
Posted 1 week ago
8.0 - 12.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Happiest Minds Technologies Pvt. Ltd is looking for a Sr Data and ML Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs. Skills: Spark MLlib, Scala, Python, Databricks on AWS, Snowflake, GitLab, Jenkins, AWS DevOps CI/CD pipeline, Machine Learning, Airflow.
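For illustration only (not part of the posting): a minimal PySpark MLlib sketch of the data-preparation-plus-model-training pattern a Data and ML Engineer role like this typically supports. The tiny in-memory dataset and feature names are purely illustrative.

```python
# Hedged sketch: assemble features and fit a simple MLlib pipeline.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("ml-pipeline").getOrCreate()

# Toy data standing in for prepared raw data
df = spark.createDataFrame(
    [(0.0, 1.2, 0.4), (1.0, 3.4, 1.1), (0.0, 0.2, 0.3), (1.0, 2.8, 0.9)],
    ["label", "f1", "f2"],
)

assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(maxIter=10)

model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("label", "prediction").show()
```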
Posted 1 week ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
PySpark, Python, SQL: strong focus on big data processing, which is core to data engineering. AWS cloud services (Lambda, Glue, S3, IAM): indicates working with cloud-based data pipelines. Airflow, GitHub: essential for orchestration and version control in data workflows.
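For illustration only (not part of the posting): a minimal sketch of an AWS Glue PySpark job reading raw CSV from S3 and writing partitioned Parquet, representative of the cloud-based pipelines mentioned above. Bucket names and columns are hypothetical, and the script assumes it runs inside an AWS Glue job.

```python
# Hedged sketch: Glue job turning raw CSV into partitioned Parquet on S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql.functions import col, to_date

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

raw = spark.read.option("header", "true").csv("s3://example-raw/orders/")  # placeholder bucket
curated = raw.withColumn("order_date", to_date(col("order_ts")))           # placeholder column

(curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated/orders/"))                              # placeholder bucket

job.commit()
```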
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
Responsible for building and maintaining high-performance data systems that enable deeper insights for all parts of our organization. Responsible for developing ETL/ELT pipelines for both batch and streaming data, and for data flows for real-time and analytics use cases. Improve data pipeline performance by implementing industry best practices and techniques for parallel data processing. Responsible for the documentation, design, development and testing of Hadoop reporting and analytical applications. Responsible for technical discussion and finalization of requirements by communicating effectively with stakeholders. Responsible for converting functional requirements into detailed technical designs. Responsible for adhering to SCRUM timelines and delivering accordingly. Responsible for preparing Unit/SIT/UAT test cases and logging the results. Responsible for planning and tracking the implementation to closure. Ability to drive enterprise-wide initiatives for usage of external data. Envision an enterprise-wide Entitlements platform and align it with the Bank’s NextGen technology vision. Continually look for process improvements. Coordinate between the various technical teams of the various systems for smooth project execution, from technical requirements discussion, overall architecture design and technical solution discussions through build, unit testing, regression testing, system integration testing, user acceptance testing, go-live, user verification testing and rollback (if required). Prepare a technical plan with clear milestone dates for technical tasks, which will be input to the PM’s overall project plan. Coordinate on a need basis with technical teams across technology that are not directly involved in the project, for example firewall/network teams, DataPower teams, EDMP, OAM, OIM, ITSC, GIS teams, etc. Responsible for supporting the change management process. Responsible for working alongside PSS teams and ensuring proper KT sessions are provided to the support teams. Identify any risks within the project and record them in Riskwise after discussion with the business and the manager. Ensure project delivery is seamless with zero to negligible defects.
Key Responsibilities
Hands-on experience with C++, .Net, SQL, jQuery, Web APIs and services, PostgreSQL and MS SQL Server, Azure DevOps and related tooling, GitHub, and ADO CI/CD pipelines. Should be transversal enough to handle Linux, PowerShell, Unix shell scripting, Kafka and Spark streaming. Hadoop: Hive, Spark, Python, PySpark. Hands-on experience with workflow schedulers like NiFi/Control-M. Experience with data loading tools like Sqoop. Experience and understanding of object-oriented programming. Motivation to learn the innovative trade of programming, debugging and deploying. Self-starter with excellent self-study skills and growth aspirations, capable of working without direction and able to deliver technical projects from scratch. Excellent written and verbal communication skills. Flexible attitude and ability to perform under pressure. Ability to lead and influence the direction and strategy of the technology organization. Test-driven development, commitment to quality and a thorough approach to work. A good team player with the ability to meet tight deadlines in a fast-paced environment. Guide junior developers and share best practices. Cloud certification (any one of Azure/AWS/GCP) will be an added advantage. Must have knowledge and understanding of Agile principles. Must have a good understanding of the project life cycle. Must have sound problem analysis and resolution abilities. Good understanding of external and internal data management and the implications of cloud usage in the context of external data.
Strategy
Develop the strategic direction and roadmap for CRES TTO, aligning with the Business Strategy, ITO Strategy and investment priorities.
Business
Work hand in hand with Product Owners, Business Stakeholders, Squad Leads and CRES TTO partners, taking product programs from investment decisions into design, specification, solutioning, development, implementation and hand-over to operations, securing support and collaboration from other SCB teams. Ensure delivery to the business meets time, cost and high-quality constraints. Support the respective businesses in growing return on investment, commercialisation of capabilities, bid teams, monitoring of usage, improving client experience, enhancing operations, addressing defects and continuous improvement of systems. Drive an ecosystem of innovation and enable the business through technology.
Governance
Promote an environment of compliance with internal control functions and the external regulatory framework.
People & Talent
Ability to work with other developers and assist junior team members. Identify training needs and take action to ensure company-wide compliance. Pursue continuing education on new solutions, technology, and skills. Problem solving with other team members in the project.
Risk Management
Interpret briefs to create high-quality code that functions according to specifications.
Key Stakeholders
CRES Domain Clients; Functions MT members, Operations and COO; ITO engineering, build and run teams; Architecture and Technology Support teams; Supply Chain Management, Risk, Legal, Compliance and Audit teams; external vendors.
Regulatory & Business Conduct
Display exemplary conduct and live by the Group’s Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Lead the team to achieve the outcomes set out in the Bank’s Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment. Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters. Serve as a Director of the Board and exercise authorities delegated by the Board of Directors, acting in accordance with the Articles of Association (or equivalent).
Other Responsibilities
Embed Here for good and the Group’s brand and values in the team. Perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures. Multiple functions (double hats).
Skills and Experience
Technical Project Delivery (Agile & Classic); Vendor Management; Stakeholder Management.
Qualifications
5+ years in a lead development role. Should have managed a team of at least 5 members. Should have delivered multiple projects end to end. Experience in property technology products (e.g. Lenel, CBRE, Milestone). Strong analytical, numerical and problem-solving skills. Should be able to understand and communicate the technical details of the project. Good communication skills, oral and written. Very good exposure to technical projects, e.g. server maintenance, system administration, development or implementation experience. Effective interpersonal and relational skills to coach and develop the team to deliver their best. Certified Scrum Master.
About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion. Together we: do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do; never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well; are better together, so we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term.
What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing. Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations. Time off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combined come to a minimum of 30 days. Flexible working options based around home and office locations, with flexible working patterns. Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits. A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning. Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Details
Role: Senior Developer
Required Technical Skill Set: Spark/Scala/Unix
Desired Experience Range: 5-8 years
Location of Requirement: Pune
Desired Competencies (Technical/Behavioral Competency)
Must-Have: Minimum 4+ years of experience in Spark Scala development. Experience in designing and developing solutions for Big Data using Hadoop ecosystem technologies, with components such as HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce and Sqoop. Good experience in writing and optimizing Spark jobs, Spark SQL, etc. Should have worked on both batch and streaming data processing. Experience in writing and optimizing complex Hive and SQL queries to process huge volumes of data; good with UDFs, tables, joins, views, etc. Experience in debugging Spark code. Working knowledge of basic UNIX commands and shell scripting. Experience with Autosys and Gradle.
Good-to-Have: Good analytical and debugging skills. Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status updates. Write clear and precise documentation/specifications. Work in an agile environment. Create documentation and document all developed mappings.
Responsibilities of / Expectations from the Role: Create Scala/Spark jobs for data transformation and aggregation. Produce unit tests for Spark transformations and helper methods. Write Scaladoc-style documentation for all code. Design data processing pipelines.
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Your Skills & Experience: Strong expertise in Data Engineering is highly recommended. • Overall 4+ years of relevant experience in Big Data technologies. • Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage. • Strong experience in at least one of the programming languages Java, Scala or Python; Java preferable. • Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc. • Well-versed in and working knowledge of data platform related services on Azure/GCP. • Bachelor’s degree and 4+ years of work experience, or any combination of education, training and/or experience that demonstrates the ability to perform the duties of the position.
Posted 1 week ago
5.0 years
4 - 9 Lacs
Bengaluru
On-site
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.
Job Description
Role Purpose: The purpose of this role is to design, test and maintain software programs for operating systems or applications that need to be deployed at a client end, and to ensure they meet 100% of quality assurance parameters.
Responsibilities: Design and implement data modeling, data ingestion and data processing for various datasets. Design, develop and maintain the ETL framework for new data sources. Develop data ingestion using AWS Glue/EMR and data pipelines using PySpark, Python and Databricks. Build orchestration workflows using Airflow and Databricks Job workflows. Develop and execute ad hoc data ingestion to support business analytics. Proactively interact with vendors on any questions and report status accordingly. Explore and evaluate tools/services to support business requirements. Ability to learn to create a data-driven culture and impactful data strategies. Aptitude for learning new technologies and solving complex problems.
Qualifications: Minimum of a bachelor’s degree, preferably in Computer Science, Information Systems or Information Technology. Minimum 5 years of experience on cloud platforms such as AWS, Azure, GCP. Minimum 5 years of experience in Amazon Web Services such as VPC, S3, EC2, Redshift, RDS, EMR, Athena, IAM, Glue, DMS, Data Pipeline & API, Lambda, etc. Minimum 5 years of experience in ETL and data engineering using Python, AWS Glue, AWS EMR/PySpark and Airflow for orchestration. Minimum 2 years of experience in Databricks, including Unity Catalog, data engineering Job workflow orchestration and dashboard generation based on business requirements. Minimum 5 years of experience in SQL, Python, and source control such as Bitbucket, with CI/CD for code deployment. Experience in PostgreSQL, SQL Server, MySQL and Oracle databases. Experience in MPP platforms such as AWS Redshift, AWS EMR, Databricks SQL warehouse and compute clusters. Experience in distributed programming with Python, Unix scripting, MPP and RDBMS databases for data integration. Experience building distributed high-performance systems using Spark/PySpark and AWS Glue, and developing applications for loading/streaming data into Databricks SQL warehouse and Redshift. Experience in Agile methodology. Proven ability to write technical specifications for data extraction and good quality code. Experience with big data processing techniques using Sqoop, Spark and Hive is an additional plus. Experience in data visualization tools including Power BI and Tableau. Nice to have: experience in UI development using the Python Flask framework and Angular.
Mandatory Skills: Python for Insights. Experience: 5-8 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry.
It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 1 week ago
5.0 years
6 - 9 Lacs
Bengaluru
On-site
As a member of the Support organization, your focus is to deliver post-sales support and solutions to the Oracle customer base while serving as an advocate for customer needs. This involves resolving post-sales non-technical customer inquiries via phone and electronic means, as well as technical questions regarding the use of and troubleshooting for our Electronic Support Services. As a primary point of contact for customers, you are responsible for facilitating customer relationships with Support and providing advice and assistance to internal Oracle employees on diverse customer situations and escalated issues. Career Level - IC3.
As a Sr. Support Engineer, you will be the technical interface to customers, Original Equipment Manufacturers (OEMs) and Value-Added Resellers (VARs) for the resolution of problems related to the installation, recommended maintenance and use of Oracle products. You will have an understanding of all Oracle products in your competencies and in-depth knowledge of several products and/or platforms. You should also be highly experienced in multiple platforms and able to complete assigned duties with minimal direction from management. In this position, you will routinely act independently while researching and developing solutions to customer issues.
RESPONSIBILITIES: Manage and resolve Service Requests logged by customers (internal and external) on Oracle products and contribute to proactive support activities according to the product support strategy and model. Own and resolve problems and manage customer expectations throughout the Service Request lifecycle in accordance with global standards. Work towards, adopt and contribute to new processes and tools (diagnostic methodology, health checks, scripting tools, etc.). Contribute to Knowledge Management content creation and maintenance. Work with development on product improvement programs (testing, SRP, BETA programs, etc.) as required. Operate within Oracle business processes and procedures. Respond to and resolve customer issues within Key Performance Indicator targets. Maintain product expertise within the team. Maintain up-to-date and in-depth knowledge of new products released in the market for the supported products.
QUALIFICATIONS: Bachelor’s degree in Computer Science, Engineering or a related technical field. 5+ years of proven professional and technical experience in Big Data Appliance (BDA), Oracle Cloud Infrastructure (OCI), Linux OS and areas such as the Cloudera distribution for Hadoop (CDH), HDFS, YARN, Spark, Hive, Sqoop, Oozie and Intelligent Data Lake. Excellent verbal and written skills in English.
SKILLS & COMPETENCIES: Minimum technical skills: As a member of the Big Data Appliance (BDA) team, the focus is to troubleshoot highly complex technical issues related to the Big Data Appliance and areas such as the Cloudera distribution for Hadoop (CDH), HDFS, YARN, Spark, Hive, Sqoop, Oozie and Intelligent Data Lake. Good hands-on experience with Linux systems and Cloudera Hadoop architecture, administration and troubleshooting, with good knowledge of different technology products, services and processes. Responsible for resolving complex issues for BDA (Big Data Appliance) customers, including issues pertaining to Cloudera Hadoop, Big Data SQL, and BDA upgrades, patches and installs. The candidate will also collaborate with other teams such as Hardware, Development, ODI and Oracle R to help resolve customers’ issues on the BDA machine. The candidate will also be responsible for interacting with customer counterparts on a regular basis and serving as the technology expert on the customers’ behalf. Experience in a multi-tier architecture environment is required. Fundamental understanding of computer networking, systems, and database technologies.
Personal competencies: Desire to learn, or expand knowledge, about Oracle database and associated products. Customer focus. Structured problem recognition and resolution. Experience of contributing to a shared knowledge base. Experience of Support-level work, such as resolving customer problems and managing customer expectations and escalations. Communication. Planning and organizing. Working globally. Quality. Team working. Results oriented.
Posted 1 week ago
7.0 years
0 Lacs
India
Remote
Kindly share your resume to lakshmi.b@iclanz.com or hr@iclanz.com.
Position: Lead Data Engineer - Healthcare domain. Experience: 7+ years. Location: Hyderabad | Chennai | Remote.
SUMMARY: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, maintain existing systems/processes and develop new features, along with reviewing, presenting and implementing performance improvements.
Duties and Responsibilities: • Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. • Monitor active ETL jobs in production. • Build out data lineage artifacts to ensure all current and future systems are properly documented. • Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes. • Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies. • Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations. • Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs. • Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults.
Required Skills: • This job has no supervisory responsibilities. • Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years’ experience in business analytics, data science, software development, data modeling or data engineering work. • 5+ years’ experience with strong proficiency in SQL query/development skills. • Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. • Hands-on experience with ETL tools (e.g. Informatica, Talend, dbt, Azure Data Factory). • Experience working in the healthcare industry with PHI/PII. • Creative, lateral, and critical thinker. • Excellent communicator. • Well-developed interpersonal skills. • Good at prioritizing tasks and time management. • Ability to describe, create and implement new solutions. • Experience with related or complementary open source software platforms and languages (e.g. Java, Linux, Apache, Perl/Python/PHP, Chef). • Knowledge of / hands-on experience with BI tools and reporting software (e.g. Cognos, Power BI, Tableau). • Big Data stack (e.g. Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).
Details Required for Submission: Requirement Name; First Name; Last Name; Email id; Best Number; Current Organization / Previous Organization you worked for (last date); Currently working on a project; Total Experience; Relevant Experience; Primary Skills; Years of Experience and Ratings (out of 10) for Data Engineer, ETL, Healthcare (PHI/PII), Fivetran, DBT; LinkedIn profile; Comfortable working from 03.00 pm to 12.00 am IST?; Communication; Education Details (Degree & year of passing); Notice Period; Vendor Company Name: iClanz Inc; Expected Salary; Current Location / Preferred Location.
Posted 1 week ago
6.0 - 11.0 years
18 - 25 Lacs
Hyderabad
Work from Office
SUMMARY
Data Modeling Professional. Location: Hyderabad/Pune. Experience: The ideal candidate should possess at least 6 years of relevant experience in data modeling, with proficiency in SQL, Python, PySpark, Hive, ETL, Unix and Control-M (or similar scheduling tools), along with GCP.
Key Responsibilities: Develop and configure data pipelines across various platforms and technologies. Write complex SQL queries for data analysis on databases such as SQL Server, Oracle, and Hive. Create solutions to support AI/ML models and generative AI. Work independently on specialized assignments within project deliverables. Provide solutions and tools to enhance engineering efficiencies. Design processes, systems, and operational models for end-to-end execution of data pipelines.
Preferred Skills: Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is advantageous.
Requirements: Minimum 6 years of experience in data modeling with SQL, Python, PySpark, Hive, ETL, Unix, Control-M (or similar scheduling tools). Proficiency in writing complex SQL queries for data analysis. Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is an advantage. Strong problem-solving and analytical abilities. Excellent communication and presentation skills. Ability to deliver high-quality materials against tight deadlines. Ability to work effectively under pressure with rapidly changing priorities. Note: The ability to communicate efficiently at a global level is paramount.
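For illustration only (not part of the posting): a minimal PySpark sketch of the GCP-flavoured analysis work mentioned above, reading a BigQuery table and aggregating it. It assumes a Dataproc cluster with the spark-bigquery connector available; the project, dataset, table and column names are hypothetical.

```python
# Hedged sketch: read a BigQuery table and run an analytical aggregation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bq-analysis").getOrCreate()

txns = (
    spark.read.format("bigquery")
    .option("table", "my-project.sales.transactions")   # placeholder table
    .load()
)

monthly = (
    txns.groupBy(F.date_trunc("month", "txn_ts").alias("month"), "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("customer_id").alias("customers"),
    )
    .orderBy("month", "region")
)
monthly.show()
```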
Posted 1 week ago
5.0 - 9.0 years
11 - 12 Lacs
Bengaluru
Work from Office
5 to 9 years of experience. Nice to have: worked in the HP ecosystem (FDL architecture). The Databricks + SQL combination is a must. EXPERIENCE: 6-8 years. SKILLS: Primary Skill: Data Engineering. Sub Skill(s): Data Engineering. Additional Skill(s): Databricks, SQL.
Posted 1 week ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a Spark / Big Data ETL Tech Lead for Commercial Card’s Global Data Repository development team. The successful candidate will interact with the Development Project Manager; the development, testing, and production support teams; and other departments within Citigroup (such as the System Administrators, Database Administrators, Data Centre Operations, and Change Control groups) for TTS platforms. He/she requires exceptional communication skills across both technology and the business and will have a high degree of visibility. The candidate will be a rigorous technical lead with a strong understanding of how to build scalable, enterprise-level global applications. The ideal candidate will be a dependable and resourceful software professional who can comfortably work in a large development team in a globally distributed, dynamic work environment that fosters diversity, teamwork and collaboration. The ability to work in a high-pressure environment is essential.
Responsibilities: Lead the design and implementation of large-scale data processing pipelines using Apache Spark on a Big Data Hadoop platform. Develop and optimize Spark applications for performance and scalability. Provide technical leadership for multiple large-scale/complex global software solutions. Integrate data from various sources, including Couchbase, Snowflake, and HBase, ensuring data quality and consistency. Experience developing teams of permanent employees and vendors of 5 to 15 developers in size. Build and sustain strong relationships with the senior business leaders associated with the platform. Design, code, test, document and implement application release projects as part of the development team. Work with onsite development partners to ensure design and coding best practices. Work closely with Program Management and Quality Control teams to deliver quality software to agreed project schedules. Proactively notify the Development Project Manager of risks, bottlenecks, problems, issues, and concerns. Comply with Citi's System Development Lifecycle and Information Security requirements. Oversee development scope, budgets and timeline documents. Monitor, update and communicate project timelines and milestones; obtain senior management feedback; understand potential speed bumps and the client’s true concerns and needs. Stay updated with the latest trends and technologies in big data and cloud computing. Mentor and guide junior developers, providing technical leadership and expertise.
Key Challenges: Managing time and changing priorities in a dynamic environment. Ability to provide a quick turnaround on software issues and management requests. Ability to assimilate key issues and concepts and come up to speed quickly.
Qualifications: Bachelor’s or master’s degree in Computer Science, Information Technology, or equivalent. Minimum 10 years of proven experience in developing and managing big data solutions using Apache Spark, with a strong hold on Spark Core, Spark SQL and Spark Streaming. Minimum 6 years of experience in successfully leading globally distributed teams. Strong programming skills in Scala, Java, or Python. Hands-on experience with technologies like Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, Flume, etc. Proficiency in SQL and experience with relational (Oracle/PL-SQL) and NoSQL databases like MongoDB. Demonstrated people and technical management skills. Demonstrated excellent software development skills. Strong experience in the implementation of complex file transformations such as positional files and XMLs. Experience in building enterprise systems with a focus on recovery, stability, reliability, scalability and performance. Experience working on Kafka and JMS/MQ applications. Experience working on multiple operating systems (Unix, Linux, Windows). Familiarity with data warehousing concepts and ETL processes. Experience in performance tuning of large technical solutions with significant data volumes. Knowledge of data modeling, data architecture, and data integration techniques. Knowledge of best practices for data security, privacy, and compliance.
Key Competencies: Excellent organization skills, attention to detail, and ability to multi-task. Demonstrated sense of responsibility and capability to deliver quickly. Excellent communication skills; clearly articulating and documenting technical and functional specifications is a key requirement. Proactive problem-solver. Relationship builder and team player. Negotiation, difficult-conversation management and prioritization skills. Flexibility to handle multiple complex projects and changing priorities. Excellent verbal, written and interpersonal communication skills. Good analytical and business skills. Promotes teamwork and builds strong relationships within and across global teams. Promotes continuous process improvement, especially in code quality, testability and reliability.
Desirable Skills: Experience in Java, Spring, and ETL tools like Talend or Ab Initio is a plus. Experience migrating functionality from ETL tools to Spark. Experience/knowledge of cloud technologies (AWS, GCP). Experience in the financial industry. ETL certification, Project Management certification. Experience with Commercial Cards applications and processes would be advantageous. Experience with Agile methodology.
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Job Family Group: Technology. Job Family: Applications Development. Time Type: Full time.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
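For illustration only (not part of the posting): a minimal PySpark sketch of the performance-tuning levers the role above calls out, such as sizing shuffle partitions, broadcasting a small dimension table, and caching a reused DataFrame. Paths, table and column names are hypothetical.

```python
# Hedged sketch: a few common Spark tuning levers on a join-heavy workload.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, col

spark = (
    SparkSession.builder.appName("tuning-sketch")
    .config("spark.sql.shuffle.partitions", "400")  # size to data volume, not the default 200
    .getOrCreate()
)

txns = spark.read.parquet("/data/transactions")      # large fact table (placeholder path)
merchants = spark.read.parquet("/data/merchants")    # small dimension table (placeholder path)

# Broadcast the small side so the large fact table is not shuffled for the join
enriched = txns.join(broadcast(merchants), "merchant_id")

# Cache only what is reused by multiple downstream actions
active = enriched.filter(col("status") == "ACTIVE").cache()

active.groupBy("merchant_category").count().show()
active.groupBy("region").sum("amount").show()
```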
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Introduction: A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Your Role and Responsibilities: As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.
Preferred Education: Master's Degree.
Required Technical and Professional Expertise: Spring Boot, Java 2/EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python.
Preferred Technical and Professional Experience: None.
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for energetic, high-performing and highly skilled Java + Big Data Engineers to help shape our technology and product roadmap. You will be part of the fast-paced, entrepreneurial Enterprise Personalization portfolio focused on delivering the next generation of global marketing capabilities. This team is responsible for building products that power Merchant Offers personalization for Amex card members.
Job Description: Demonstrated leadership in designing sustainable software products, setting development standards, automated code review processes, continuous builds and rigorous testing. Ability to effectively lead and communicate across third parties, technical and business product managers on solution design. Primary focus is spent writing code and API specs, conducting code reviews and testing in ongoing sprints, or doing proofs of concept/automation tools. Applies visualization and other techniques to fast-track concepts. Functions as a core member of an Agile team, driving user story analysis and elaboration, design and development of software applications, testing, and build automation tools. Works on a specific platform/product or as part of a dynamic resource pool assigned to projects based on demand and business priority. Identifies opportunities to adopt innovative technologies.
Qualification: Bachelor's degree in computer science, computer engineering, another technical discipline, or equivalent work experience. 5+ years of software development experience. 3-5 years of experience leading teams of engineers. Demonstrated experience with Agile or other rapid application development methods. Demonstrated experience with object-oriented design and coding. Demonstrated experience with these core technical skills (mandatory): Core Java, Spring Framework, Java EE; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark; relational databases (PostgreSQL / MySQL / DB2, etc.); data serialization techniques (Avro); cloud development (microservices); parallel and distributed (multi-tiered) systems; application design, software development and automated testing. Demonstrated experience with these additional technical skills (nice to have): Unix / shell scripting; Python / Scala; message queuing, stream processing (Kafka); Elasticsearch; AJAX tools/frameworks; web services, open API development, and REST concepts; experience implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, automated testing and JUnit.
Posted 1 week ago
6.0 - 11.0 years
10 - 20 Lacs
Visakhapatnam, Hyderabad, Bengaluru
Work from Office
Should have working experience in Spark/Scala, AWS and Big Data environments (Hadoop, Hive, Sqoop), with Python scripting or Java programming (nice to have), and be willing to relocate to Hyderabad.
Posted 1 week ago
10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: Lead Data Engineer
Job Summary: The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, as well as work on project teams to analyse, design, develop and deploy business intelligence / data integration solutions to support a variety of customer needs. This position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings and initiatives through mentoring and coaching. Provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g. cloud, Hadoop, NoSQL) and approaches to address business and environmental challenges. Works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports and business intelligence best practices. Responsible for repeatable, lean and maintainable enterprise BI design across organizations. Effectively partners with the client team. Leadership not only in the conventional sense, but also within a team: we expect people to be leaders. Candidates should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing and approachability.
Responsibilities: Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc. Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc. Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow and the conceptual, logical, and physical data models based on those needs. Perform data analysis to validate data models and to confirm the ability to meet business needs. May serve as project or DI lead, overseeing multiple consultants from various competencies. Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for Data Integration. Ensure proper execution/creation of methodology, training, templates, resource plans and engagement review processes. Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate. Coordinate and consult with the project manager, client business staff, client technical staff and project developers on data architecture best practices and anything else that is data related at the project or business unit levels. Architect, design, develop and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations and best-practice standards; toolsets include but are not limited to SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau and Qlik. Work with the report team to identify, design and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.
Required Qualifications: 10 years of industry implementation experience with data integration tools such as AWS services (Redshift, Athena, Lambda, Glue, S3), ETL, etc. 5-8 years of management experience required. 5-8 years of consulting experience preferred. Minimum of 5 years of data architecture, data modelling or similar experience. Bachelor's degree or equivalent experience; Master's degree preferred. Strong data warehousing, OLTP systems, data integration and SDLC experience. Strong experience in orchestration and working experience with cloud-native / 3rd-party ETL data load orchestration, including Data Factory, HDInsight, Data Pipeline, Cloud Composer or similar. Understanding of and experience with major data architecture philosophies (Dimensional, ODS, Data Vault, etc.). Understanding of modern data warehouse capabilities and technologies such as real-time, cloud and Big Data. Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP). Strong experience in Agile processes (Scrum cadences, roles, deliverables) and working experience in Azure DevOps, JIRA or similar, with experience in CI/CD using one or more code management platforms. Strong Databricks experience, required to create notebooks in PySpark. Experience using major data modelling tools (e.g. ERwin, ER/Studio, PowerDesigner). Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift). 3-5 years of development experience in decision support / business intelligence environments utilizing tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.
Preferred Skills & Experience: Knowledge of and working experience with data integration processes, such as data warehousing, EAI, etc. Experience in providing estimates for data integration projects, including testing, documentation, and implementation. Ability to analyse business requirements as they relate to the data movement and transformation processes, and to research, evaluate and recommend alternative solutions. Ability to provide technical direction to other team members, including contractors and employees. Ability to contribute to conceptual data modelling sessions to accurately define business processes, independently of data structures, and then combine the two together. Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results. Demonstrated ability to serve as a trusted advisor that builds influence with client management beyond simply EDM. Can create documentation and presentations such that they "stand on their own". Can advise sales on the evaluation of data integration efforts for new or existing client work. Can contribute to internal/external data integration proofs of concept. Demonstrates the ability to create new and innovative solutions to problems that have previously not been encountered. Ability to work independently on projects as well as collaborate effectively across teams. Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success. Strong team building, interpersonal, analytical, problem identification and resolution skills. Experience working with multi-level business communities. Can effectively utilise SQL and/or an available BI tool to validate and elaborate business rules. Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems and issues. Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data. Demonstrates a complete understanding of and utilises DSC methodology documents to efficiently complete assigned roles and associated tasks. Deals effectively with all team members and builds strong working relationships and rapport with them. Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution. Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytic standpoint.
Posted 1 week ago
7.0 years
3 - 10 Lacs
Bengaluru
On-site
Job ID: 28021 Location: Bangalore, IN Area of interest: Technology Job type: Regular Employee Work style: Office Working Opening date: 28 May 2025 Job Summary Responsible for the design and development analytical data model, reports and dashboards Provide design specifications to both onsite & offshore teams Develop functional specifications and document the requirements clearly Develop ETL, Reports, data mapping documents and technical design specifications for the project deliverables. Analyse the business requirements and come up with the end-to-end design including technical implementation. Key Responsibilities Strategy Responsible for the design and development analytical data model, reports and dashboards Provide design specifications to both onsite & offshore teams Develop functional specifications and document the requirements clearly Develop data mapping documents and technical design specifications for the project deliverables. Analyse the business requirements and come up with the end-to-end design including technical implementation. Expert in Power BI, MSTR, Informatica, Hadoop Platform Ecosystem, SQL, Java, R, Python, Java Script, Hive, Spark, Linux Scripts Good knowledge on to Install, upgrade, administration, and troubleshooting of ETL & Reporting tools such as Power BI, MSTR, Informatica, Oracle and Hadoop Implement performance tuning techniques for Reports, ETL and data Migration Develop ETL procedures to ensure conformity, compliance with standards and translate business rules and functionality requirements into ETL procedures. Assess and review the report performance, come up with performance optimization techniques using VLDB settings and explain plan. Develop scripts to automate the production deployments. Conduct product demonstrations and user training sessions to business users Work with testing teams to improve the quality of the testing in adopting the automated testing tools and management of the application environments. Business Collaborate and partner with product owner, business users and senior business stakeholders to understand the data and reporting requirements of the business and clearly document it for further analysis Work closely with architects and infrastructure teams and review the solutions Interact with support teams periodically and get input on the various business users’ needs Provide all the required input and assistance to the business users in performing the data validation and ensure that the data and reporting delivered with accurate numbers. Processes Process oriented and experienced in onsite-offshore project delivery, using agile methodology and best practices Well versed with agile based project delivery methodology. Should have successfully implemented or delivered projects using best practices on technology delivery and project release automation in banking and financing industry Deployment automation for Oracle Databse and Hadoop , Informatica workflow WorkFlow, Integration, BI layer including MicroStrategy and PBI components to the feasible extent Actively participate in discussions with business users and seek endorsements and approvals wherever necessary w.r.t technology project delivery. People & Talent Minimum 7 years of experience in the business intelligence and data warehouse domain. 
Create project estimations, solution and design documentation, operational guidelines and production handover documentation Should have excellent technical, analytical, interpersonal and delivery capabilities in the areas of complex reporting for banking domain, especially in the area of Client Analytics and CRM. Full Life-cycle Business Intelligence (BI) and Data Warehousing project experience, starting with requirements analysis, proof-of-concepts, design, development, testing, deployment and administration Shall be a good team player with excellent written and verbal communications. Process oriented and experienced in onsite-offshore project delivery, using agile methodology and best practices Should be able play an Individual Contributor role Risk Management Assess and evaluate the risks that are related to the project delivery and update the stakeholders with appropriate remediation and mitigation approach. Review the technical solutions and deliverables with architects and key technology stakeholders and ensure that the deliverables are adhering to the risk governance rules Governance Work with Technology Governance and support teams and establish standards for simplifying the existing Microstrategy reports and Informatica batch programs Take end to end ownership of managing and administering the Informatica & Hadoop , MSTR and Power BI. Regulatory & Business Conduct Display exemplary conduct and live by the Group’s Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters. [Fill in for regulated roles] Lead the [country / business unit / function/XXX [team] to achieve the outcomes set out in the Bank’s Conduct Principles: [Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.] * [Insert local regulator e.g. PRA/FCA prescribed responsibilities and Rationale for allocation]. [Where relevant - Additionally, for subsidiaries or relevant non -subsidiaries] Serve as a Director of the Board of [insert name of entities] Exercise authorities delegated by the Board of Directors and act in accordance with Articles of Association (or equivalent) Key stakeholders Business and Operations (Product Owners) Sales Enablement Client Coverage Reporting Technology Services Teams Production Support Teams Skills and Experience Design, development of ETL procedures using Informatica PowerCenter Performance tuning of star schemas to optimize load and query performance of SQL queries. Hive, HiveQL, HDFS, Scala, Spark, Sqoop, HBase, YARN, Presto , Dremio Experience in Oracle 11g, 19c. 
Strong knowledge and understanding of SQL and the ability to write SQL and PL/SQL. BI and analytical dashboards, reporting design and development using Power BI tools and the MicroStrategy Business Intelligence product suite (MicroStrategy Intelligence Server, MicroStrategy Desktop, MicroStrategy Web, MicroStrategy Architect, MicroStrategy Object Manager, MicroStrategy Command Manager, MicroStrategy Integrity Manager, MicroStrategy Office, Visual Insight, Mobile development). Design of dimensional models such as star and snowflake schemas. Setting up connections to a Hadoop big data (data lake) cluster through Kerberos authentication mechanisms. Banking and finance domain knowledge specific to financial markets, collateral, trade life cycle, operational CRM, analytical CRM and client-related reporting. Design and implementation of Azure data solutions and Microsoft Azure Cloud.
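The skills listed above combine Hive, Spark and star-schema reporting over a Kerberos-secured data lake. As a purely illustrative, hedged sketch of that kind of work (not part of the posting), the following PySpark snippet queries hypothetical Hive tables and aggregates them into a reporting table; it assumes a Kerberos ticket has already been obtained (for example via kinit) and that the Hive metastore is reachable.

```python
# Minimal sketch: star-schema style aggregation over Hive tables with PySpark.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("client-analytics-report")
    .enableHiveSupport()  # use the existing Hive metastore as the table catalog
    .getOrCreate()
)

fact = spark.table("analytics.fact_trades")   # hypothetical fact table
dim = spark.table("analytics.dim_client")     # hypothetical dimension table

report = (
    fact.join(dim, "client_id")
        .groupBy("client_segment")
        .agg(
            F.sum("notional_amount").alias("total_notional"),
            F.count("*").alias("trade_count"),
        )
)

report.write.mode("overwrite").saveAsTable("analytics.rpt_client_notional")
spark.stop()
```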
Posted 1 week ago
2.0 - 4.0 years
6 - 7 Lacs
Chennai
On-site
The Applications Development Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements. Identify and analyze issues, make recommendations, and implement solutions. Utilize knowledge of business processes, system processes, and industry standards to solve complex issues. Analyze information and make evaluative judgements to recommend solutions and improvements. Conduct testing and debugging, utilize script tools, and write basic code for design specifications. Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures. Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
The candidate should have 2 to 4 years of application development experience through the full lifecycle of Java and Big Data applications, primarily in Core Java/J2EE application development. The candidate should be strong in data structures and algorithms, and should have worked on core application development of significant complexity encompassing all areas of Java/J2EE. Thorough knowledge of and hands-on experience in the following technologies is expected: Hadoop, the MapReduce framework, Spark, YARN, Sqoop, Pig, Hue, Unix, Java, Impala, and Cassandra on Mesos. Cloudera certification (CCDH) is an added advantage. Work in an agile environment, following the best practices of agile Scrum. Expertise in designing and optimizing software solutions for performance and stability.

Education:
Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
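Since the posting above lists the MapReduce framework, Spark and YARN together, here is a hedged, illustrative sketch (not part of the posting) of the classic MapReduce word-count pattern expressed with the PySpark RDD API; the HDFS paths are hypothetical and the job is assumed to be submitted to a YARN cluster, for example with spark-submit --master yarn.

```python
# Minimal sketch: MapReduce-style word count with PySpark RDDs.
# Input and output HDFS paths are hypothetical placeholders.
from pyspark import SparkContext

sc = SparkContext(appName="wordcount")

counts = (
    sc.textFile("hdfs:///data/input/logs")        # read input splits
      .flatMap(lambda line: line.split())          # map: emit words
      .map(lambda word: (word, 1))                 # map: (word, 1) pairs
      .reduceByKey(lambda a, b: a + b)             # reduce: sum counts per word
)

counts.saveAsTextFile("hdfs:///data/output/word_counts")
sc.stop()
```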
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: Data Architect (C2)

Job Summary
The Data Architect will provide technical expertise in the analysis, design, development, rollout and maintenance of enterprise data models and solutions. Provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches, from a business unit or project perspective. Understands and leverages best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges. Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance and metadata management. Leadership is expected not only in the conventional sense but also within the team: we expect people to be leaders. The candidate should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing and approachability.

Essential Duties
Design and develop conceptual/logical/physical data models for building large-scale data lake and data warehouse solutions. Understanding of data integration processes (batch or real-time) using tools such as Informatica PowerCenter and/or Cloud, Microsoft SSIS, MuleSoft, DataStage, Sqoop, etc. Create functional and technical documentation, e.g. data integration architecture documentation, data models, data dictionaries, data integration specifications and data testing plans. Collaborate with business users to analyse and test requirements. Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for data architecture. Assist with and support setting the data architecture direction (including data movement approach, architecture/technology strategy, and any other data-related considerations to ensure business value), ensuring data architecture deliverables are developed, ensuring compliance with standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level. Coordinate and consult with the project manager, client business staff, client technical staff and project developers on data architecture best practices and anything else that is data related at the project or business unit level.

Education & Experience
5-10 years of enterprise data modelling experience using major data modelling tools (examples: ERwin, ER/Studio, PowerDesigner, etc.). Expert proficiency in data contracts, data modelling, and Data Vault 2.0. Experience with major database platforms (e.g. Oracle, SQL Server, Teradata, etc.). Understanding and experience with major data architecture philosophies (Dimensional, ODS, Data Vault, etc.).
3-5 years of management experience required; 3-5 years of consulting experience preferred. Bachelor's degree or equivalent experience; Master's degree preferred. Experience in data analysis and profiling. Strong knowledge of data warehousing and OLTP systems from a modelling and integration perspective. Strong understanding of data integration best practices and concepts. Strong development experience under Unix and/or Windows environments. Strong SQL skills required; scripting (e.g., PL/SQL) preferred. Strong knowledge of all phases of the system development life cycle. Understanding of modern data warehouse capabilities and technologies such as real-time, cloud and Big Data. Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, Google Cloud).

Preferred Skills & Experience
Comprehensive understanding of relational databases and technical documentation. Ability to analyse business requirements as they relate to data movement and transformation processes, and to research, evaluate and recommend alternative solutions. Ability to transform business requirements into technical requirement documents. Ability to run conceptual data modelling sessions to accurately define business processes, independently of data structures, and then combine the two together. Can create documentation and presentations such that they stand on their own. Demonstrates the ability to create new and innovative solutions to problems that have not previously been encountered. Ability to work independently on projects as well as collaborate effectively across teams. Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success. Strong team building, interpersonal, analytical, problem identification and resolution skills. Experience working with multi-level business communities. Can effectively utilise SQL and/or available BI tools to validate and elaborate business rules. Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems and issues. Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution. Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytics standpoint. Demonstrated ability to serve as a trusted advisor who builds influence with client management beyond simply EDM.
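Dimensional modelling, mentioned throughout this posting, centres on a fact table of measures surrounded by descriptive dimension tables. The snippet below is a minimal, hedged illustration of that star-schema idea using Python's built-in sqlite3 module; all table and column names are hypothetical and it is not tied to any tool named in the posting.

```python
# Minimal sketch: a tiny star schema (one fact table, two dimensions) in sqlite3.
# Table and column names are hypothetical placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables hold descriptive attributes.
cur.execute("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        customer_name TEXT,
        segment TEXT
    )
""")
cur.execute("""
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,
        calendar_date TEXT,
        fiscal_quarter TEXT
    )
""")

# The fact table holds measures plus foreign keys to the dimensions.
cur.execute("""
    CREATE TABLE fact_sales (
        sales_id INTEGER PRIMARY KEY,
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        date_key INTEGER REFERENCES dim_date(date_key),
        quantity INTEGER,
        amount REAL
    )
""")

# A typical analytical query joins the fact to its dimensions and aggregates.
cur.execute("""
    SELECT d.fiscal_quarter, c.segment, SUM(f.amount) AS total_amount
    FROM fact_sales f
    JOIN dim_customer c ON f.customer_key = c.customer_key
    JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.fiscal_quarter, c.segment
""")
rows = cur.fetchall()
conn.close()
```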
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Position: Lead Data Engineer
Experience: 7+ Years
Location: Remote

Summary
We are looking for a Lead Data Engineer responsible for ETL processes and documentation in building scalable data warehouses and analytics capabilities. This role involves maintaining existing systems, developing new features, and implementing performance improvements.

Key Responsibilities
Build ETL pipelines using Fivetran and dbt for internal and client projects across platforms like Azure, Salesforce, and AWS. Monitor active production ETL jobs. Create and maintain data lineage documentation to ensure complete system traceability. Develop design/mapping documents for clear and testable development, QA, and UAT. Evaluate and implement new data integration tools based on current and future requirements. Identify and eliminate process redundancies to streamline data operations. Work with the Data Quality Analyst to implement validation checks across ETL jobs. Design and implement large-scale data warehouses, BI solutions, and Master Data Management (MDM) systems, including Data Lakes/Data Vaults.

Required Skills & Qualifications
Bachelor's degree in Computer Science, Software Engineering, Math, or a related field. 6+ years of experience in data engineering, business analytics, or software development. 5+ years of experience with strong SQL development skills. Hands-on experience in Snowflake and Azure Data Factory (ADF). Proficient in ETL toolsets such as Informatica, Talend, dbt, and ADF. Experience with PHI/PII data and working in the healthcare domain is preferred. Strong analytical and critical thinking skills. Excellent written and verbal communication. Ability to manage time and prioritize tasks effectively. Familiarity with scripting and open-source platforms (e.g., Python, Java, Linux, Apache, Chef). Experience with BI tools like Power BI, Tableau, or Cognos. Exposure to Big Data technologies: Snowflake (Snowpark), Apache Spark, Hadoop, Hive, Sqoop, Pig, Flume, HBase, MapReduce.
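The posting asks for validation checks across ETL jobs against a Snowflake warehouse. The following is a minimal, hedged sketch (not part of the posting) of a post-load row-count reconciliation check using the snowflake-connector-python package; the connection parameters, environment variables, expected count and table name are all hypothetical placeholders.

```python
# Minimal sketch: post-load row-count validation against a Snowflake table.
# All connection details and the table name are hypothetical placeholders.
import os
import snowflake.connector

EXPECTED_ROWS = 125_000  # hypothetical count reported by the upstream extract

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM STG_CLAIMS")  # hypothetical target table
    loaded_rows = cur.fetchone()[0]
    if loaded_rows != EXPECTED_ROWS:
        raise ValueError(
            f"Row-count mismatch: expected {EXPECTED_ROWS}, loaded {loaded_rows}"
        )
    print("Row-count check passed")
finally:
    conn.close()
```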
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Impetus
Impetus Technologies is a digital engineering company focused on delivering expert services and products to help enterprises achieve their transformation goals. We solve the analytics, AI, and cloud puzzle, enabling businesses to drive unmatched innovation and growth. Founded in 1991, we are cloud and data engineering leaders providing solutions to Fortune 100 enterprises, headquartered in Los Gatos, California, with development centers in Noida, Indore, Gurugram, Bengaluru, Pune, and Hyderabad and over 3000 global team members. We also have offices in Canada and collaborate with a number of established companies, including American Express, Bank of America, Capital One, Toyota, United Airlines, and Verizon.

Experience: 3-8 years
Location: Gurgaon & Bangalore

Job Description
You should have extensive production experience in GCP; other cloud experience would be a strong bonus. Strong background in data engineering with 2-3 years of experience in Big Data technologies including Hadoop, NoSQL, Spark, Kafka, etc. Exposure to enterprise application development is a must.

Roles & Responsibilities
Able to effectively use GCP managed services, e.g. Dataproc, Dataflow, Pub/Sub, Cloud Functions, BigQuery, GCS (at least 4 of these services). Good to have knowledge of Cloud Composer, Cloud SQL, Bigtable and Cloud Functions. Strong experience in Big Data technologies (Hadoop, Sqoop, Hive and Spark), including DevOps. Good hands-on expertise in either Python or Java programming. Good understanding of GCP core services like Google Cloud Storage, Google Compute Engine, Cloud SQL and Cloud IAM. Good to have knowledge of GCP services like App Engine, GKE, Cloud Run, Cloud Build and Anthos. Ability to drive the deployment of customers' workloads into GCP and provide guidance, a cloud adoption model, service integrations, appropriate recommendations to overcome blockers, and technical roadmaps for GCP cloud implementations. Experience with technical solutions based on industry standards using GCP IaaS, PaaS and SaaS capabilities. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technologies. Act as a subject-matter expert or developer for GCP and become a trusted advisor to multiple teams.
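BigQuery is one of the GCP managed services this posting emphasises. As a hedged, illustrative sketch (not from the posting), the snippet below runs a simple aggregation with the google-cloud-bigquery client library; the project, dataset and table names are hypothetical, and application default credentials are assumed to be configured (for example via gcloud auth application-default login).

```python
# Minimal sketch: running an aggregation query in BigQuery with the Python client.
# Project, dataset and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-analytics-project.web.clickstream`
    GROUP BY event_date
    ORDER BY event_date
"""

rows = client.query(query).result()  # blocks until the query job finishes

for row in rows:
    print(row.event_date, row.events)
```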
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Applications Development Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements. Identify and analyze issues, make recommendations, and implement solutions. Utilize knowledge of business processes, system processes, and industry standards to solve complex issues. Analyze information and make evaluative judgements to recommend solutions and improvements. Conduct testing and debugging, utilize script tools, and write basic code for design specifications. Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures. Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
The candidate should have 2 to 4 years of application development experience through the full lifecycle of Java and Big Data applications, primarily in Core Java/J2EE application development. The candidate should be strong in data structures and algorithms, and should have worked on core application development of significant complexity encompassing all areas of Java/J2EE. Thorough knowledge of and hands-on experience in the following technologies is expected: Hadoop, the MapReduce framework, Spark, YARN, Sqoop, Pig, Hue, Unix, Java, Impala, and Cassandra on Mesos. Cloudera certification (CCDH) is an added advantage. Work in an agile environment, following the best practices of agile Scrum. Expertise in designing and optimizing software solutions for performance and stability.

Education:
Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
India has seen a rise in demand for professionals skilled in Sqoop, a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases. Job seekers with expertise in Sqoop can explore various opportunities in the Indian job market.
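To make the tool's purpose concrete, here is a hedged, illustrative sketch of a typical Sqoop import of a relational table into HDFS, wrapped in a small Python helper; the JDBC URL, credentials, table and target directory are hypothetical placeholders, and it assumes the sqoop CLI is installed on a Hadoop edge node.

```python
# Minimal sketch: invoking a Sqoop import from Python via subprocess.
# All connection details, table and path values are hypothetical placeholders.
import subprocess

cmd = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://db-host:3306/sales",   # hypothetical source database
    "--username", "etl_user",
    "--password-file", "/user/etl/.db_password",      # avoids passwords on the command line
    "--table", "orders",
    "--target-dir", "/data/raw/orders",               # HDFS landing directory
    "--num-mappers", "4",                              # parallel map tasks for the transfer
]

subprocess.run(cmd, check=True)
```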
The average salary range for Sqoop professionals in India varies based on experience levels:
- Entry-level: Rs. 3-5 lakhs per annum
- Mid-level: Rs. 6-10 lakhs per annum
- Experienced: Rs. 12-20 lakhs per annum

Typically, a career in Sqoop progresses as follows:
1. Junior Developer
2. Sqoop Developer
3. Senior Developer
4. Tech Lead

In addition to expertise in Sqoop, professionals in this field are often expected to have knowledge of:
- Apache Hadoop
- SQL
- Data warehousing concepts
- ETL tools
As you explore job opportunities in the field of Sqoop in India, make sure to prepare thoroughly and showcase your skills confidently during interviews. Stay updated with the latest trends and advancements in Sqoop to enhance your career prospects. Good luck with your job search!