
585 Teradata Jobs - Page 20

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Ab Initio Data Engineer
We are looking for an Ab Initio Data Engineer to design and build Ab Initio-based applications across the Data Integration, Governance & Quality domains for Compliance Risk programs. The individual will work with Technical Leads, Senior Solution Engineers, and prospective Application Managers to build applications, roll out and support production environments leveraging the Ab Initio tech stack, and ensure the overall success of these programs. The programs are high-visibility, fast-paced key initiatives that generally aim to acquire and curate data and metadata across internal and external sources, provide analytical insights, and integrate with other Citi systems.

Technical Stack:
Ab Initio 4.0.x software suite: Co>Op, GDE, EME, BRE, Conduct>It, Express>It, Metadata>Hub, Query>It, Control>Center, Easy>Graph
Big Data: Cloudera Hadoop, Hive, YARN
Databases: Oracle 11g/12c, Teradata, MongoDB, Snowflake
Others: JIRA, ServiceNow, Linux, SQL Developer, AutoSys, and Microsoft Office

Responsibilities:
Design and build Ab Initio graphs (both continuous and batch) and Conduct>It plans, and integrate with the portfolio of Ab Initio software.
Build web-service and RESTful graphs and create RAML or Swagger documentation.
Complete understanding and analytical ability of the Metadata Hub metamodel.
Strong hands-on multifile-system-level programming, debugging, and optimization skills.
Hands-on experience in developing complex ETL applications.
Good knowledge of RDBMS (Oracle), with the ability to write the complex SQL needed to investigate and analyze data issues.
Strong UNIX shell/Perl scripting.
Build graphs interfacing with heterogeneous data sources: Oracle, Snowflake, Hadoop, Hive, AWS S3.
Build application configurations for Express>It frameworks: Acquire>It, Spec-To-Graph, Data Quality Assessment.
Build automation pipelines for Continuous Integration & Delivery (CI/CD), leveraging the Testing Framework and JUnit modules and integrating with Jenkins, JIRA, and/or ServiceNow.
Build Query>It data sources for cataloguing data from different sources.
Parse XML, JSON, and YAML documents, including hierarchical models (illustrated after this listing).
Build and implement data acquisition and transformation/curation requirements in a data lake or warehouse environment, leveraging various Ab Initio components.
Build AutoSys or Control Center jobs and schedules for process orchestration.
Build BRE rulesets for reformat, rollup, and validation use cases.
Build SQL scripts on the database, perform performance tuning and relational model analysis, and perform data migrations.
Identify performance bottlenecks in graphs and optimize them.
Ensure the Ab Initio code base is appropriately engineered to maintain current functionality, with development that adheres to performance optimization and interoperability standards and requirements, and that complies with client IT governance policies.
Build regression and functional test cases, and write user manuals for various projects.
Conduct bug fixing, code reviews, and unit, functional, and integration testing.
Participate in the agile development process, and document and communicate issues and bugs relative to data standards.
Pair up with other data engineers to develop analytic applications leveraging Big Data technologies: Hadoop, NoSQL, and in-memory data grids.
Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment.
Perform other duties and/or special projects as assigned.

Qualifications:
Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, or Econometrics) and a minimum of 5 years of experience.
Minimum 5 years of extensive experience in the design, build, and deployment of Ab Initio-based applications.
Expertise in handling complex, large-scale Data Lake and Warehouse environments.
Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities.

Education: Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster, the EEO is the Law Supplement, the EEO Policy Statement, and the Pay Transparency Posting.
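Ab Initio graph development itself happens in the GDE, but the hierarchical-document parsing duty above can be sketched in plain Python. A minimal illustration; the file names, element names, and the flatten helper are hypothetical, not Citi's actual feeds:

```python
# Illustrative sketch only: parsing hierarchical XML, JSON, and YAML
# metadata documents of the kind the posting mentions.
import json
import xml.etree.ElementTree as ET

import yaml  # PyYAML


def flatten(node: dict, prefix: str = "") -> dict:
    """Flatten a nested dict into dotted key paths, e.g. feed.source.name."""
    flat = {}
    for key, value in node.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat


with open("feed_config.json") as fh:            # hypothetical feed descriptor
    feed = json.load(fh)
print(flatten(feed))

rules = yaml.safe_load(open("dq_rules.yaml"))   # hypothetical DQ ruleset
tree = ET.parse("lineage_export.xml")           # hypothetical metadata export
for dataset in tree.getroot().iter("dataset"):
    print(dataset.get("name"), dataset.findtext("owner"))
```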

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Years of Experience: Candidates with 4+ years of experience in developing and delivering scalable big data pipelines using Apache Spark and Databricks on AWS.

Position Requirements

Must Have:
Build and maintain scalable data pipelines using Databricks and Apache Spark.
Develop and optimize ETL/ELT processes for structured and unstructured data.
Knowledge of Lakehouse architecture for efficient data storage, processing, and analytics.
Orchestrating ETL/ELT pipelines: design and manage data workflows using Databricks Workflows and the Jobs API.
Experience with AWS data services (S3, Lambda, CloudWatch) for seamless integration.
Performance optimization: optimize queries using pushdown capabilities and indexing strategies.
Experience with data governance using Unity Catalog, security policies, and access controls.
Monitor, troubleshoot, and improve Databricks jobs and clusters.
Exposure to end-to-end implementation of migration projects to the AWS cloud.
AWS and Python expertise with hands-on cloud development.
Orchestration: Airflow.
Code repositories: Git, GitHub.
Strong SQL writing skills.
Cloud data migration: deep understanding of the process.
Strong analytical, problem-solving, and communication skills.

Good to Have Knowledge / Skills:
Experience in Teradata, DataStage, SSIS.
Knowledge of Databricks Delta Live Tables.
Knowledge of Delta Lake.
Streaming: Kafka, Spark Streaming.
CI/CD: Jenkins.
IaC & automation: Terraform for Databricks deployment.

Professional and Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
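For readers unfamiliar with the stack, a minimal sketch of the kind of Databricks/Spark ETL pipeline the Must Have list describes. The bucket path, table name, and columns are hypothetical:

```python
# Minimal PySpark ETL sketch: extract raw JSON, cleanse, load a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw files landed in S3 (hypothetical bucket/prefix)
raw = spark.read.json("s3://raw-zone/orders/2025/05/")

# Transform: basic cleansing and derivation
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
       .filter(F.col("net_amount") >= 0)
)

# Load: write as a Delta table for downstream analytics
clean.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```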

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Years of Experience: Candidates with 8+ years of experience in architecting and delivering scalable big data pipelines using Apache Spark and Databricks on AWS.

Position Requirements

Must Have:
Design, build, and maintain scalable data pipelines using Databricks and Apache Spark.
Good knowledge of the Medallion architecture in the Databricks Lakehouse (sketched after this listing).
Develop and optimize ETL/ELT processes for structured and unstructured data.
Implement Lakehouse architecture for efficient data storage, processing, and analytics.
Orchestrating ETL/ELT pipelines: design and manage data workflows using Databricks Workflows and the Jobs API.
Work with AWS data services (S3, Lambda, CloudWatch) for seamless integration.
Performance optimization: optimize queries using pushdown capabilities and indexing strategies.
Implement data governance with Unity Catalog, security policies, and access controls.
Collaborate with data scientists, analysts, and engineers to enable advanced analytics.
Monitor, troubleshoot, and improve Databricks jobs and clusters.
Strong expertise in end-to-end implementation of migration projects to the AWS cloud.
Should be aware of data management concepts and data modelling.
AWS and Python expertise with hands-on cloud development.
Spark performance tuning: Core, SQL, and Streaming.
Orchestration: Airflow.
Code repositories: Git, GitHub.
Strong SQL writing skills.
Cloud data migration: deep understanding of the process.
Strong analytical, problem-solving, and communication skills.

Good to Have Knowledge / Skills:
Experience in Teradata, DataStage, SSIS, and Mainframe (COBOL, JCL, Zeke scheduler).
Knowledge of Lakehouse Federation.
Knowledge of Delta Lake.
Knowledge of Databricks Delta Live Tables.
Streaming: Kafka, Spark Streaming.
CI/CD: Jenkins.
IaC & automation: Terraform for Databricks deployment.
Knowledge of integrating third-party APIs with Databricks.
Knowledge of the Transport & Mobility domain.

Professional and Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
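The Medallion architecture referenced above layers tables as bronze (raw), silver (validated), and gold (business-level aggregates). A hedged PySpark/Delta sketch, with hypothetical table and column names:

```python
# Medallion layering sketch on Delta tables: bronze -> silver -> gold.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingest, stored as-is with load metadata
bronze = (spark.read.json("s3://landing/events/")
               .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("bronze.events")

# Silver: validated, de-duplicated, typed
silver = (spark.table("bronze.events")
               .dropDuplicates(["event_id"])
               .filter(F.col("event_type").isNotNull()))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")

# Gold: business-level aggregate serving BI
gold = (spark.table("silver.events")
             .groupBy("event_type", F.to_date("event_ts").alias("event_date"))
             .count())
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_event_counts")
```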

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Source: LinkedIn

Job Description
Senior Associate / Manager / IICS Developer
Job Location: Bhubaneswar

The IICS Developer will be responsible for designing, developing, and implementing cloud-based ETL (Extract, Transform, Load) solutions using Informatica Intelligent Cloud Services (IICS). The role involves working with Cloud Data Integration (CDI), Cloud Application Integration (CAI), and other Informatica tools to enable seamless data movement across cloud and on-premise environments.

Key Responsibilities:
Design and develop ETL pipelines and data integration workflows using IICS (CDI, CAI, and Cloud Data Quality).
Extract, transform, and load data across cloud platforms (AWS, Azure, GCP) and on-premise databases.
Work with REST/SOAP APIs to integrate cloud applications.
Optimize ETL performance and ensure efficient data processing and workflow execution.
Collaborate with data architects, analysts, and business teams to gather and understand data requirements.
Implement error handling, logging, and performance tuning in ETL processes.
Maintain and enhance data quality and governance within the organization.
Work with various databases such as Snowflake, Redshift, Teradata, Oracle, and SQL Server for data integration.
Develop automation scripts using Unix/Linux shell scripting or Python for workflow scheduling (a hedged example follows this listing).

Required Skills & Qualifications:
Technical Skills:
Strong experience in IICS (CDI, CAI, CDQ) and PowerCenter.
Hands-on expertise in ETL development, data transformation, and integration.
Proficiency in SQL and PL/SQL, and experience with relational and cloud databases (Snowflake, Redshift, Teradata, etc.).
Experience with API-based integrations (REST, SOAP).
Exposure to cloud platforms (AWS, Azure, GCP) and cloud-native databases.
Strong knowledge of data warehousing concepts, ETL methodologies, and best practices.
Experience with performance tuning and troubleshooting ETL workflows.

Other Information
Role: IICS Developer - SA/M
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate
Employment Type: Full Time, Permanent
Key Skills: IICS, ETL, CDI, AWS
Job Code: GO/JC/21456/2025
Recruiter Name: Kamlesh Kumar
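As a hedged illustration of the workflow-scheduling automation duty above, the sketch below triggers an IICS task from Python over the v2 REST API. The endpoint paths, payload fields, and US-region login URL follow Informatica's commonly documented v2 API but should be treated as assumptions and verified against your organization's POD; the account and task id are hypothetical.

```python
# Hedged sketch: start an IICS task via the (assumed) v2 REST API.
import requests

LOGIN_URL = "https://dm-us.informaticacloud.com/ma/api/v2/user/login"  # assumption: US POD

login = requests.post(
    LOGIN_URL,
    json={"@type": "login", "username": "svc_etl", "password": "***"},  # hypothetical account
    timeout=30,
)
login.raise_for_status()
session = login.json()
headers = {"icSessionId": session["icSessionId"]}

# Start a (hypothetical) Data Synchronization task by id
job = requests.post(
    f'{session["serverUrl"]}/api/v2/job',
    json={"@type": "job", "taskId": "0000ABCD", "taskType": "DSS"},  # assumed payload shape
    headers=headers,
    timeout=30,
)
job.raise_for_status()
print("Started:", job.json())
```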

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Job Description
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Corporate Technology, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job Responsibilities
Provide end-to-end application and infrastructure service delivery for the successful business operations of the firm.
Execute policies and procedures that ensure engineering and operational stability and availability.
Monitor production environments for anomalies, address issues, and drive the evolution of standard observability tooling (see the sketch after this listing).
Escalate and communicate issues and solutions to business and technology stakeholders, actively participating from incident resolution to service restoration.
Lead incident, problem, and change management in support of full-stack technology systems, applications, and infrastructure.

Required Qualifications, Capabilities, and Skills
Formal training or certification on software engineering concepts and 3+ years applied experience.
Certified software engineering professional with 8+ years applied experience.
Proficiency on the AWS cloud platform, with system design, application/tools/interface/module development, testing, problem solving, and operational stability.
Hands-on experience with infrastructure-as-code tools such as Terraform and Helm charts; able to independently design, build, test, and deploy the code.
Design, deploy, and manage Kubernetes clusters across various environments (on-premises, AWS cloud, hybrid) as a Kubernetes platform engineer.
Experience with K8s services, deployments, failover mechanisms, networking, and security policies, including CNI plugins, ingress controllers, and service meshes.
Minimum 4+ years of experience in Terraform, Python, and shell scripting technologies.
Hands-on experience with Continuous Integration and Delivery tools such as Jules, Spinnaker, and Jenkins.
Ability to set up monitoring, logging, and alerting for Kubernetes clusters using Grafana, Prometheus, and Splunk.
Strong SQL skills; PostgreSQL, AWS RDS, Aurora, or Teradata preferred, but experience in any other RDBMS technology would suffice.

Preferred Qualifications, Capabilities, and Skills
Familiarity with modern front-end technologies.
Exposure to cloud technologies.
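As one concrete example of the monitoring and alerting duties above, a minimal pod-health sweep using the official kubernetes Python client; the namespace and restart threshold are hypothetical:

```python
# Minimal cluster health check with the official kubernetes client.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

unhealthy = []
for pod in v1.list_namespaced_pod("payments-prod").items:  # hypothetical namespace
    phase = pod.status.phase
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    if phase not in ("Running", "Succeeded") or restarts > 5:
        unhealthy.append((pod.metadata.name, phase, restarts))

for name, phase, restarts in unhealthy:
    # In practice this would feed Splunk/Prometheus alerting rather than stdout
    print(f"ALERT {name}: phase={phase} restarts={restarts}")
```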

Posted 3 weeks ago

Apply

5.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

Expertise in data modelling tools (ER/Studio, Erwin; IDM/ARDM models; CPG, Manufacturing, Sales, Finance, Supplier, and Customer domains).
Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake.
Experience with version control systems like GitHub, and with deployment and CI tools.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Working knowledge of SAP.

Posted 3 weeks ago

Apply

7.0 - 11.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Source: Naukri

Job Description: Senior Principal Data Engineer - Data Engineering

Value Proposition
Responsible for building the data platform that supports data integrations: building data pipelines to share enterprise data, and designing and building cloud solutions with appropriate data access, data security, data privacy, and data governance. Lead a team of Data Engineers to maintain the platform, constantly keeping it up to date with new technologies. Use agile engineering practices and various data development technologies to rapidly develop creative and efficient data products.

Job Details
Position Title: Senior Principal Data Engineer
Career Level: P5
Job Category: Vice President
Role Type: Hybrid
Job Location: Bangalore

About the Team
The data engineering team is a community of dedicated professionals committed to designing, building, and maintaining data platform solutions for the organization.

Impact (Job Summary / Why This Role Matters)
The enterprise data warehouse supports several critical business functions for the bank, including regulatory reporting, Finance, risk steering, and Customer 360. This role is vital for building and maintaining the enterprise data platform and data processes in support of business objectives. Our values of inclusivity, transparency, and excellence drive everything we do. Join us and make a meaningful impact on the organization.

Key Deliverables (Duties and Responsibilities)
As a Senior Principal Data Engineer, you will be responsible for building and maintaining the data platform that supports data integrations: enriching data pipelines to share enterprise data, and designing, building, and maintaining a data platform such as an Enterprise Data Warehouse, Operational Data Store, or Data Marts, with appropriate data access, data security, data privacy, and data governance.
Demonstrate technical knowledge and leadership in software development, data engineering frameworks, and best practices.
Build a strategy and execution plan for multiple programs/initiatives across the organization.
Help teams architect and design large-scale applications.
Act as a trusted advisor to leaders (Directors / Sr. Directors) on strategic technology and data solution directions.
Participate on the Change Advisory Board (CAB) and ensure effective change control is implemented for all infrastructure and/or application installations, rollbacks, and updates.
Collaborate with Data Architects, Solution Architects, and Data Modelers to enhance the data platform design; constantly identify a backlog of tech debt in line with identified upgrades, and provide and implement technical solutions.
Collaborate with IT and CSO teams to ensure compliance with data governance, privacy, and security policies and regulations.
Manage deliverables of developers, perform design reviews, and coordinate release management activities.
Drive automation, identify inefficiencies, optimize processes and data flows, and recommend improvements.
Use agile engineering practices and various data development technologies to rapidly develop and implement efficient data products.
Work with global technology teams across different time zones (primarily US) to deliver timely business value.

Skills and Qualifications (Functional and Technical Skills)

Functional Skills:
Leadership: Drive strategic and technical initiatives for the data engineering team; provide guidance and mentorship.
Business/Domain Knowledge: Good understanding of application systems and business domains.
Partnership and Collaboration: Develop and maintain partnerships with business and IT stakeholders.
Communication: Excellent verbal, written, and interpersonal communication skills.
Problem Solving: Excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality.
Team Player: Support peers, team, and department management.
Attention to Detail: Ensure accuracy and thoroughness in all tasks.

Technical/Business Skills:
Data Engineering: Experience in designing and building data warehouse and business intelligence dashboards. Extensive knowledge of data warehouse principles, design, and concepts. Technical expertise working in large-scale data warehousing applications and databases such as Oracle, Netezza, Teradata, and SQL Server. Deep technical knowledge of data engineering frameworks and best practices. Experience with public cloud-based data platforms, especially Snowflake and AWS, and machine learning capabilities such as SageMaker and DataRobot.
Data Integration: Expertise in creating and maintaining ETL processes and architecting complex data pipelines; knowledge of data modeling techniques and high-volume ETL/ELT design; solutions using industry-leading ETL tools such as SAP BusinessObjects Data Services (BODS), Informatica Cloud Data Integration Services (IICS), or IBM DataStage; knowledge of ELT tools such as dbt, Fivetran, and AWS Glue.
Data Modeling: Expert knowledge of logical and physical data models using relational or dimensional modeling practices, and high-volume ETL/ELT design.
SQL and Scripting: Expert in SQL, with development experience in at least one scripting language (Python, etc.); adept at tracing and resolving data integrity issues (a hedged reconciliation sketch follows this listing).
Data Visualization: Power BI or Tableau.
Performance: Tuning of data pipelines and DB objects to deliver optimal performance; excellent data analysis skills using SQL and experience in incident management techniques.
Compliance: Data protection/compliance standards such as GDPR, CCPA, and HIPAA.
Experience working in the financial industry is a plus.

Leadership Qualities (For People Leaders)
Communication: Clearly conveys ideas and listens actively.
Inspiration: Motivates and encourages the team to achieve their best.
Influence: Extensive stakeholder management experience and the ability to influence people; drives strategic and technical initiatives.

Relationships & Collaboration
Reports to: Associate Director - Data Engineering
Partners: Senior leaders and cross-functional teams
Leads: A team of Data Engineering associates

Accessibility Needs
We are committed to providing an inclusive and accessible hiring process. If you require accommodations at any stage (e.g., application, interviews, onboarding), please let us know, and we will work with you to ensure a seamless experience.
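As flagged under SQL and Scripting, a hedged sketch of a source-to-target reconciliation check of the kind used when tracing data integrity issues. The connections are assumed DB-API objects (say, an Oracle source and a Snowflake target), the table names are hypothetical, and the parameter-marker style varies by driver:

```python
# Hedged sketch: compare row counts between a source table and its
# warehouse target for one load date. `%s` paramstyle is driver-dependent
# (Oracle drivers use :1-style binds, for example).
def row_counts(conn, table: str, date_col: str, load_date: str) -> int:
    cur = conn.cursor()
    cur.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {date_col} = %s", (load_date,)
    )
    return cur.fetchone()[0]


def reconcile(src_conn, tgt_conn, load_date: str) -> None:
    src = row_counts(src_conn, "sales.orders", "load_dt", load_date)      # hypothetical
    tgt = row_counts(tgt_conn, "edw.fact_orders", "load_dt", load_date)   # hypothetical
    if src != tgt:
        raise RuntimeError(
            f"Reconciliation failed for {load_date}: source={src} target={tgt}"
        )
    print(f"{load_date}: {src} rows matched")
```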

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

Job Title: Python Data Engineer – AWS
Job Location: Remote
Job Type: Full-time
Client: Direct

Description
We are seeking a highly skilled Python Data Engineer with deep expertise in AWS-based data solutions. This role is responsible for designing, building, and optimizing large-scale data pipelines and frameworks that power analytics and machine learning workloads. You'll lead the modernization of legacy systems by migrating workloads from platforms like Teradata to AWS-native big data environments such as EMR, Glue, and Redshift. Strong emphasis is placed on reusability, automation, observability, and performance optimization.

Key Responsibilities
Migration & Modernization: Build reusable accelerators and frameworks to migrate data from legacy platforms (e.g., Teradata) to AWS-native architectures such as EMR and Redshift.
Data Pipeline Development: Design and implement robust ETL/ELT pipelines using Python, PySpark, and SQL on AWS big data platforms.
Code Quality & Testing: Drive development standards with test-driven development, unit testing, and automated validation of data pipelines.
Monitoring & Observability: Build operational tooling and dashboards for pipeline observability, including metrics tracking for latency, throughput, data quality, and cost (a sketch follows this listing).
Cloud-Native Engineering: Architect scalable, secure data workflows using AWS services like Glue, Lambda, Step Functions, S3, and Athena.
Collaboration: Partner with internal product teams, data scientists, and external stakeholders to clarify requirements and drive solutions aligned with business goals.
Architecture & Integration: Work with enterprise architects to evolve data architecture while integrating AWS systems with on-premise or hybrid environments securely.
ML Support & Experimentation: Enable data scientists to operationalize machine learning models by providing clean, well-governed datasets at scale.
Documentation & Enablement: Document solutions thoroughly and provide technical guidance and knowledge sharing to internal engineering teams.

Qualifications
Experience: 4+ years in technology roles, with experience in data engineering, software development, and distributed systems.
Programming: Expert in Python and PySpark (Scala is a plus); deep understanding of software engineering best practices.
AWS Expertise: 4+ years of hands-on experience in the AWS data ecosystem; proficient in AWS Glue, S3, Redshift, EMR, Athena, Step Functions, and Lambda; experience with AWS Lake Formation and data cataloging tools is a plus; AWS Data Analytics or Solutions Architect certification is a strong plus.
Big Data & MPP Systems: Strong grasp of distributed data processing; experience with MPP data warehouses like Redshift, Snowflake, or Databricks on AWS.
DevOps & Tooling: Experience with version control (GitHub/CodeCommit) and CI/CD tools (CodePipeline, Jenkins, etc.); familiarity with containerization and deployment in Kubernetes or ECS.
Data Quality & Governance: Experience with data profiling, data lineage, and related tools; understanding of metadata management and data security best practices.
Bonus: Experience supporting machine learning or data science workflows; familiarity with BI tools such as QuickSight, Power BI, or Tableau.
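A minimal sketch of the monitoring-and-observability responsibility referenced above, using boto3: start a Glue job, poll it to completion, and publish a custom CloudWatch metric. The job name and metric namespace are hypothetical:

```python
# Glue job run with basic observability via a custom CloudWatch metric.
import time

import boto3

glue = boto3.client("glue")
cloudwatch = boto3.client("cloudwatch")

run = glue.start_job_run(JobName="orders_curation")  # hypothetical Glue job
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="orders_curation", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

# Emit a metric so dashboards and alarms can track pipeline health
cloudwatch.put_metric_data(
    Namespace="DataPipelines",  # hypothetical namespace
    MetricData=[{
        "MetricName": "GlueJobFailed",
        "Value": 0.0 if state == "SUCCEEDED" else 1.0,
        "Unit": "Count",
    }],
)
```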

Posted 3 weeks ago

Apply

1.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Our Company
At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers, and our customers’ customers, to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.

What You’ll Do
The Accounting Specialist will:
Ensure the monthly accounts for your assigned country are submitted accurately and on time.
Ensure all adjustments are submitted and processed in a timely manner.
Ensure all subledgers are interfaced correctly with the General Ledger.
Research and resolve accounting issues for your assigned country.
Perform account reconciliation for your assigned country.

Who You’ll Work With
As an experienced accountant within the Indian Finance Centre, you will play a key role in providing accounting services to our Teradata organisations in APAC or America (which may change to other regions as per business requirements). Working with the Team Leader and other team members, you will be responsible for the delivery of accurate and timely accounting information. This will often involve close collaboration with other specialized departments.

Minimum Requirements
Bachelor’s degree in Accounting, Finance, or another related business discipline.
1 to 4 years’ experience within a large multinational organization, preferably within a Shared Services Centre.
Solid familiarity with Microsoft Office products and Outlook.
Experience in ERPs like SAP/Oracle and HFM preferred.
Fluent in English.
Energetic and results-oriented, with a “can do” attitude.

What You’ll Bring
Ability to collaborate and partner with other team members and BUs to provide an overall superior level of service.
Ability to take the lead in researching and resolving issues, as needed.
Ability to take ownership of special projects and effectively deliver positive results.
Technical and comprehensive knowledge of finance and accounting systems and processing.

Why We Think You’ll Love Teradata
We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are.

Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Are you insatiably curious, deeply passionate about the realm of databases and analytics, and ready to tackle complex challenges in a dynamic environment in the era of AI? If so, we invite you to join our team as a Cloud & AI Solution Engineer in Innovative Data Platform for commercial customers at Microsoft. Here, you'll be at the forefront of innovation, working on cutting-edge projects that leverage the latest technologies to drive meaningful impact. Join us and be part of a team that thrives on collaboration, creativity, and continuous learning.

Databases & Analytics is a growth opportunity for Microsoft Azure, as well as its partners and customers. It includes a rich portfolio of products, including IaaS and PaaS services on the Azure platform in the age of AI. These technologies empower customers to build, deploy, and manage database and analytics applications in a cloud-native way.

As an Innovative Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements like proofs of concept, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, hone your technical skills, and become adept at solution design and deployment. As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities
Drive technical sales with decision makers, using demos and PoCs to influence solution design and enable production deployments.
Lead hands-on engagements (hackathons and architecture workshops) to accelerate adoption of Microsoft's cloud platforms.
Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions.
Resolve technical blockers and objections, collaborating with engineering to share insights and improve products.
Maintain deep expertise in the Analytics portfolio (Microsoft Fabric: OneLake, DW, real-time intelligence, BI, Copilot; Azure Databricks; Purview Data Governance) and Azure Databases (SQL DB, Cosmos DB, PostgreSQL).
Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions.
Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums.

Qualifications
5+ years technical pre-sales or technical consulting experience; OR Bachelor's degree in Computer Science, Information Technology, or a related field AND 4+ years technical pre-sales or technical consulting experience; OR Master's degree in Computer Science, Information Technology, or a related field AND 3+ years technical pre-sales or technical consulting experience; OR equivalent experience.
Expert on Azure Databases (SQL DB, Cosmos DB, PostgreSQL), from migration and modernization to creating new AI apps.
Expert on Azure Analytics (Fabric, Azure Databricks, Purview) and competitors (BigQuery, Redshift, Snowflake) in data warehouse, data lake, big data, analytics, real-time intelligence, and reporting using integrated Data Security & Governance.
Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Are you insatiably curious, deeply passionate about the realm of databases and analytics, and ready to tackle complex challenges in a dynamic environment in the era of AI? If so, we invite you to join our team as a Cloud & AI Solution Engineer in Innovative Data Platform for commercial customers at Microsoft. Here, you'll be at the forefront of innovation, working on cutting-edge projects that leverage the latest technologies to drive meaningful impact. Join us and be part of a team that thrives on collaboration, creativity, and continuous learning.

Databases & Analytics is a growth opportunity for Microsoft Azure, as well as its partners and customers. It includes a rich portfolio of products, including IaaS and PaaS services on the Azure platform in the age of AI. These technologies empower customers to build, deploy, and manage database and analytics applications in a cloud-native way.

As an Innovative Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements like proofs of concept, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, hone your technical skills, and become adept at solution design and deployment. As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities
Drive technical sales with decision makers, using demos and PoCs to influence solution design and enable production deployments.
Lead hands-on engagements (hackathons and architecture workshops) to accelerate adoption of Microsoft's cloud platforms.
Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions.
Resolve technical blockers and objections, collaborating with engineering to share insights and improve products.
Maintain deep expertise in the Analytics portfolio (Microsoft Fabric: OneLake, DW, real-time intelligence, BI, Copilot; Azure Databricks; Purview Data Governance) and Azure Databases (SQL DB, Cosmos DB, PostgreSQL).
Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions.
Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums.

Qualifications
5+ years technical pre-sales or technical consulting experience; OR Bachelor's degree in Computer Science, Information Technology, or a related field AND 4+ years technical pre-sales or technical consulting experience; OR Master's degree in Computer Science, Information Technology, or a related field AND 3+ years technical pre-sales or technical consulting experience; OR equivalent experience.
Expert on Azure Databases (SQL DB, Cosmos DB, PostgreSQL), from migration and modernization to creating new AI apps.
Expert on Azure Analytics (Fabric, Azure Databricks, Purview) and competitors (BigQuery, Redshift, Snowflake) in data warehouse, data lake, big data, analytics, real-time intelligence, and reporting using integrated Data Security & Governance.
Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About This Role
Wells Fargo is seeking a Financial Accounting Associate.

In This Role, You Will
Provide support for financial accounting related matters for lines of business and control functions.
Review basic activities associated with maintaining ledger accounts, financial statements, and regulatory reports.
Gather financial data for financial and regulatory reports.
Review data from the general ledger, unit reports, and various financial systems to ensure accuracy.
Receive direction from managers and exercise independent judgment while developing an understanding of financial control functions in accordance with the company's internal control policies.
Collaborate and consult with peers, colleagues, and managers to resolve issues and achieve goals.

Required Qualifications:
6+ months of Finance, Accounting, Analytics, Financial Reporting, Accounting Reporting, or Risk Reporting experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications:
Qualified CA/CFA/ICWA/MBA with experience in one or a combination of the following: finance, accounting, analytics, financial reporting, accounting reporting, or risk reporting.
Financial reporting and analysis experience.
Advanced problem-solving and troubleshooting capabilities.
Advanced SQL and Alteryx development experience.
Knowledge of RDBMSs (relational database management systems) such as Teradata, Oracle, and MS SQL Server.
Experience in the data management life cycle, including data analysis, data profiling, data acquisition, and data quality principles.
Knowledge and understanding of data management concepts, processes, and tools.
Advanced Microsoft Office (Word, Excel, Outlook, and Access) skills.
Ability to create reporting from databases and data marts.
Build various dashboards in Power BI with SQL as the back-end data source.
Support data migrations, system transitions, data mapping, data lineage, data reconciliation, and documentation in alignment with policy and governance.
Consult with stakeholders to understand their business and automation needs, and turn them into viable solutions that improve their ability to support their clients globally.
BI: experience with creating visualizations; dashboarding experience involving multiple views that all respond to navigation/filters/etc.; ability to publish components that can be reused across dashboards/workbooks and used for self-service by other analysts working on the same domain (and/or to reuse cubes created by others where expedient).
SQL: access, combine, and calculate data; create temp tables; structure data for common BI needs such as drilling or side-by-side comparison (a small worked example follows this listing).
Documentation skills: requirements, query documentation, testing.

Job Expectations:
Review and aggregate data for analysis and reporting. Analysis will include period-over-period variances, trends, aging, capital calculations, and reserve adjustments. This team will also support the preparation and review of management and regulatory reports and data submissions, and help drive efficiency and automation of these processes. The team will also build a center-of-excellence capability responsible for partnering across the Controller teams to ensure data analytic needs are met, including automation of processes for enhanced efficiency and effectiveness. Access key systems and provide system table maintenance; also handle a variety of research and ad hoc requests.
Knowledge of US regulatory agencies (i.e., SEC, OCC, FED) and related reporting.
Analyze past results, perform variance analysis, identify trends, and make recommendations for improvements.
Knowledge of banking and lending data and reporting.
Strong analytical skills with high attention to detail and accuracy.
Ability to execute in a fast-paced, high-demand environment while balancing multiple priorities.
Ability to exercise independent judgment and creative problem-solving techniques.
Ability to prepare presentations, management reporting, and statistical analysis.
Ability to be flexible and adjust plans quickly to meet changing business needs.
Experience in corporate loans, capital markets, and investment banking preferred.
Knowledge of Basel regulatory rules.

Posting End Date: 24 May 2025
Job posting may come down early due to volume of applicants.

We Value Equal Opportunity
Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic.

Employees support our focus on building strong customer relationships balanced with a strong risk-mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.

Candidates applying to job openings posted in Canada: applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples, and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.

Applicants with Disabilities
To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.

Drug and Alcohol Policy
Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more.

Wells Fargo Recruitment and Hiring Requirements
Third-party recordings are prohibited unless authorized by Wells Fargo. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.

Reference Number: R-407381
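As referenced in the SQL item above, a small worked example of the period-over-period variance analysis this role performs, in pandas with hypothetical accounts and balances:

```python
# Period-over-period variance on general-ledger balances (toy data).
import pandas as pd

gl = pd.DataFrame({
    "account": ["loans", "deposits", "loans", "deposits"],
    "period":  ["2025-03", "2025-03", "2025-04", "2025-04"],
    "balance": [1200.0, 950.0, 1350.0, 900.0],
})

# Pivot so each period becomes a column, then derive variance measures
pivot = gl.pivot(index="account", columns="period", values="balance")
pivot["variance"] = pivot["2025-04"] - pivot["2025-03"]
pivot["variance_pct"] = pivot["variance"] / pivot["2025-03"] * 100
print(pivot.round(1))
```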

Posted 3 weeks ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Source: Naukri

Job Description:
We are looking for a strong Informatica Support Engineer with a solid background in ETL support, Unix shell scripting, and PL/SQL (Oracle). The ideal candidate must have experience working in an ITSM-driven production support environment and be willing to work in a 24x7 rotational shift model. The role involves troubleshooting Informatica workflows, debugging data issues, handling batch failures, and optimizing ETL performance. The candidate should be proactive, analytical, and capable of handling critical production incidents effectively.

Key Responsibilities:

ETL / Informatica Support
Provide L2/L3 support for Informatica PowerCenter, handling job failures, debugging, and performance tuning.
Monitor, troubleshoot, and resolve batch job failures, ensuring timely data processing and availability.
Work on incident, problem, and change management processes in an ITSM-driven environment.
Debug Informatica workflows, mappings, and sessions using logs, error messages, and data flow analysis.
Optimize Informatica jobs for performance improvement and ensure SLA adherence.
Support real-time and batch data processing across enterprise data warehouses.

Unix & Shell Scripting
Analyze and modify Unix shell scripts used in ETL workflows.
Manage and automate batch job execution using shell scripting.
Perform log analysis and job monitoring in a Unix/Linux environment (see the hedged sketch after this listing).

PL/SQL & Oracle Database
Debug SQL queries, stored procedures, and database jobs running on Oracle.
Perform data validation, reconciliation, and query optimization.
Collaborate with database teams on indexing, partitioning, and query tuning.
Handle data discrepancies, missing records, and corruption issues efficiently.

ITSM & Production Support
Work in an ITIL/ITSM-based support model handling incidents, problems, and changes.
Participate in root cause analysis (RCA) and provide permanent fixes for recurring issues.
Ensure timely resolution of tickets in accordance with SLAs.
Create and maintain runbooks, SOPs, and knowledge-base articles for L1/L2 teams.

Shift & On-Call Support
Work in a 24x7 rotational shift environment, ensuring uninterrupted support.
Handle critical P1/P2 incidents and escalate to appropriate teams as required.
Provide on-call support as per the roster, including weekends and holidays when required.

Offshore shifts (all on a rotational basis, work from office in India):
Mon-Fri: rotation between two 9-hour shifts (6:30 AM - 3:30 PM IST or 12:30 PM - 9:30 PM IST).
Sat-Sun and US Eastern holidays: rotation among three shifts (6:30 AM - 3:30 PM IST, 12:30 PM - 9:30 PM IST, or 9:15 PM - 6:30 AM IST).
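As referenced under Unix & Shell Scripting, a hedged sketch of batch-failure log triage. Shell is the usual tool for this role; Python is used here only to keep the examples on this page in one language, and the log directory and error patterns are hypothetical:

```python
# Scan ETL session logs for failure signatures worth raising a ticket for.
import re
from pathlib import Path

ERROR_PAT = re.compile(r"(ERROR|FATAL|Session task instance .* failed)", re.I)


def scan_session_logs(log_dir: str) -> list[str]:
    hits = []
    for log in Path(log_dir).glob("*.log"):
        for lineno, line in enumerate(log.read_text(errors="replace").splitlines(), 1):
            if ERROR_PAT.search(line):
                hits.append(f"{log.name}:{lineno}: {line.strip()}")
    return hits


for hit in scan_session_logs("/opt/infa/sess_logs"):  # hypothetical path
    print(hit)  # each hit is a candidate for an ITSM incident ticket
```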

Posted 3 weeks ago

Apply

2.0 - 3.0 years

4 - 5 Lacs

Hyderabad

Work from Office

Source: Naukri

Duration: 12 Months
Job Type: Contract
Work Type: Onsite

Job Description:
Analyzes business requirements/processes and system integration points to determine appropriate technology solutions. Designs, codes, tests, and documents applications based on system and user requirements.

Requirements:
2-4 years of relevant IT experience in data-warehousing technologies, with excellent communication and analytical skills.
Should possess experience with the below skill set:
Informatica 9 or above as an ETL tool.
Teradata/Oracle/SQL Server as the warehouse database.
Very strong in SQL and macros (a small Teradata query sketch follows this listing).
Should know basic to medium UNIX commands.
Knowledge of Hadoop: HDFS, Hive, Pig, and YARN.
Knowledge of the ingestion tool StreamSets.
Good to have knowledge of Spark and Kafka.
Exposure to scheduling tools like Control-M.
Excellent analytical and problem-solving skills are a must.
Excellent communication skills (oral and written).
Must be experienced in diverse industries, tools, and data warehousing technologies.

Responsibilities:
Prepares flow charts and systems diagrams to assist in problem analysis.
Responsible for preparing design documentation.
Designs, codes, tests, and debugs software according to the client's standards, policies, and procedures.
Codes, tests, and documents programs according to system standards.
Prepares test data for unit, string, and parallel testing.
Analyzes business needs and creates software solutions.
Evaluates and recommends software and hardware solutions to meet user needs.
Interacts with business users and IT to define current and future application requirements.
Executes schedules, costs, and documentation to ensure the project comes to a successful conclusion.
Initiates corrective action to stay on project schedules.
May assist in orienting, training, assigning, and checking the work of lower-level employees.
Leads small- to moderate-budget projects.

Knowledge and Skills:
Possesses and applies a broad knowledge of application programming processes and procedures to the completion of complex assignments. Competent to analyze diverse and complex problems. Possesses and applies broad knowledge of principles of applications programming, and is competent to work in most phases of applications programming. Beginning to lead small projects or starting to offer programming solutions at an advanced level. Knowledge includes advanced work on standard applications programs, including coding, testing, and debugging. Advanced ability to effectively troubleshoot program errors. Advanced understanding of how technology decisions relate to business needs.

Mandatory Skills:
Informatica 9 or above as an ETL tool.
Teradata/Oracle/SQL Server as the warehouse database.
Very strong in SQL and macros.
Good knowledge of UNIX commands.

Experience:
Total experience: 2-3 years; relevant experience: 2 years.
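As referenced in the Requirements, a small sketch of querying Teradata from Python with the teradatasql driver; the host, credentials, and table are hypothetical:

```python
# Daily row counts from a (hypothetical) Teradata fact table.
import teradatasql

con = teradatasql.connect(host="tdprod", user="etl_user", password="***")
cur = con.cursor()
cur.execute(
    "SELECT trans_dt, COUNT(*) "
    "FROM edw.sales_fact "          # hypothetical warehouse table
    "WHERE trans_dt >= ? "
    "GROUP BY trans_dt ORDER BY trans_dt",
    ["2025-05-01"],
)
for trans_dt, cnt in cur.fetchall():
    print(trans_dt, cnt)
cur.close()
con.close()
```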

Posted 3 weeks ago

Apply

5.0 - 7.0 years

8 - 10 Lacs

Tamil Nadu

Work from Office

Source: Naukri

Duration: 12 Months

PRODUCT AND REQUIREMENTS MANAGEMENT: Participate in the development of use cases, requirements, test cases, epics, user stories, and tasks in Jira. Ensure that software standards are implemented in adherence to the client IT security and controls policy. Simplify complex ideas and convey them clearly in both oral and written communication.

DESIGN/DEVELOP/TEST/DEPLOY: Work with the business customer, Product Owner, architects, product designer, and other software engineers on application design, development, and deployment of applications and data products. Participate in daily stand-up meetings.

OPERATIONS: Generate metrics, perform user access authorization, perform password maintenance, and perform SharePoint administration. Participate in daily operations meetings.

INCIDENT, PROBLEM AND CHANGE/SERVICE REQUESTS: Participate in and/or lead incident, problem, change, and service request related activities, including root cause analysis (RCA) and proactive problem-management defect-prevention activities.

Skills Required:
Java/J2EE experience.
Development environments: IntelliJ/Eclipse.
Web front-end development, including JavaScript, Angular, React, etc.
Develop REST-based microservices using Spring, Spring Boot, Spring Cloud, Spring Listener, Spring MVC, JavaScript, HTML, XML, and JUnit.
Cloud, container image development, and container orchestration experience; experience working with cloud environments, including GCP, Azure, OpenShift, and Docker/Kubernetes.
Data manipulation experience: SQL data manipulation (relational and NoSQL); relational products including PostgreSQL, SQL Server, and Teradata; big data products on GCP; streaming products including Kafka and MQTT.
Build tools experience.

Skills Preferred:
GitHub.
Build tools like Tekton, Jenkins, and Gradle.
Implement and optimize cloud services and tools (e.g., Terraform, BigQuery, GCP).
Knowledge of source code management systems such as Git.
Proven experience understanding, practicing, and advocating for software engineering disciplines from Clean Code, Software Artisanship, and Lean, including: paired/mobbing programming, test-first/test-driven development (TDD), evolutionary design, and minimum viable product.

Experience Required: 5-7 Years
Education Required: BE

Posted 3 weeks ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Hyderabad

Hybrid

Source: Naukri

Position: Senior Software Engineer
Location: Hyderabad
Duration: 12 Months
Job Type: Contract
Work Type: Onsite

Job Description:
Analyzes business requirements/processes and system integration points to determine appropriate technology solutions. Designs, codes, tests, and documents applications based on system and user requirements.

Requirements:
6-8 years of relevant IT experience in data-warehousing technologies, with excellent communication and analytical skills.
Should possess experience with the below skill set:
Informatica 9 or above as an ETL tool.
Teradata/Oracle/SQL Server as the warehouse database.
Very strong in SQL and macros.
Should know basic to medium UNIX commands.
Knowledge of Hadoop: HDFS, Hive, Pig, and YARN.
Knowledge of the ingestion tool StreamSets.
Good to have knowledge of Spark and Kafka.
Exposure to scheduling tools like Control-M.
Excellent analytical and problem-solving skills are a must.
Excellent communication skills (oral and written).
Must be experienced in diverse industries, tools, and data warehousing technologies.

Responsibilities:
Prepares flow charts and systems diagrams to assist in problem analysis.
Responsible for preparing design documentation.
Designs, codes, tests, and debugs software according to the client's standards, policies, and procedures.
Codes, tests, and documents programs according to system standards.
Prepares test data for unit, string, and parallel testing.
Analyzes business needs and creates software solutions.
Evaluates and recommends software and hardware solutions to meet user needs.
Interacts with business users and IT to define current and future application requirements.
Executes schedules, costs, and documentation to ensure the project comes to a successful conclusion.
Initiates corrective action to stay on project schedules.
May assist in orienting, training, assigning, and checking the work of lower-level employees.
Leads small- to moderate-budget projects.

Knowledge and Skills:
Possesses and applies a broad knowledge of application programming processes and procedures to the completion of complex assignments. Competent to analyze diverse and complex problems. Possesses and applies broad knowledge of principles of applications programming, and is competent to work in most phases of applications programming. Beginning to lead small projects or starting to offer programming solutions at an advanced level. Knowledge includes advanced work on standard applications programs, including coding, testing, and debugging. Advanced ability to effectively troubleshoot program errors. Advanced understanding of how technology decisions relate to business needs.

Mandatory Skills:
Informatica 9 or above as an ETL tool.
Teradata/Oracle/SQL Server as the warehouse database.
Very strong in SQL and macros.
Good knowledge of UNIX commands.

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Location: Hyderabad / Bangalore / Pune
Experience: 7-16 Years
Work Mode: Hybrid
Mandatory Skills: ER/Studio, Data Warehouse, Data Modelling, Databricks, ETL, PostgreSQL, MySQL, Oracle, NoSQL, Hadoop, Spark, Dimensional Modelling, OLAP, OLTP, Erwin, Data Architect and Supply Chain (preferred)

Job Description
We are looking for a talented and experienced Senior Data Modeler to join our growing team. As a Senior Data Modeler, you will be responsible for designing, implementing, and maintaining data models to enhance data quality, performance, and scalability. You will collaborate with cross-functional teams including data analysts, architects, and business stakeholders to ensure that the data models align with business requirements and drive efficient data management.

Key Responsibilities
Design, implement, and maintain data models that support business requirements, ensuring high data quality, performance, and scalability.
Collaborate with data analysts, data architects, and business stakeholders to align data models with business needs.
Leverage expertise in Azure, Databricks, and data warehousing to create and manage data solutions.
Manage and optimize relational and NoSQL databases such as Teradata, SQL Server, Oracle, MySQL, MongoDB, and Cassandra.
Contribute to and enhance the ETL processes and data integration pipelines to ensure smooth data flows.
Apply data modeling principles and techniques, such as ERD and UML, to design and implement effective data models.
Stay up-to-date with industry trends and emerging technologies, such as big data technologies like Hadoop and Spark.
Develop and maintain data models using data modeling tools such as ER/Studio and Hackolade.
Drive the adoption of best practices and standards for data modeling within the organization.

Skills And Qualifications
Minimum of 6+ years of experience in data modeling, with a proven track record of implementing scalable and efficient data models.
Expertise in Azure and Databricks for building data solutions.
Proficiency in ER/Studio, Hackolade, and other data modeling tools.
Strong understanding of data modeling principles and techniques (e.g., ERD, UML).
Experience with relational databases (e.g., Teradata, SQL Server, Oracle, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Solid understanding of data warehousing, ETL processes, and data integration.
Familiarity with big data technologies such as Hadoop and Spark is an advantage.
Industry Knowledge: A background in supply chain is preferred but not mandatory.
Excellent analytical and problem-solving skills.
Strong communication skills, with the ability to interact with both technical and non-technical stakeholders.
Ability to work well in a collaborative, fast-paced environment.

Education
B.Tech in any branch or specialization

Skills: data visualization, OLTP, Databricks, Spark, data modelling, supply chain, Oracle, Hadoop, dimensional modelling, ER/Studio, NoSQL, data warehouse, data modeling, data architect, ETL, Erwin, MySQL, OLAP, PostgreSQL

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Greater Kolkata Area

On-site

Location: Hyderabad / Bangalore / Pune
Experience: 7-16 Years
Work Mode: Hybrid
Mandatory Skills: ER/Studio, Data Warehouse, Data Modelling, Databricks, ETL, PostgreSQL, MySQL, Oracle, NoSQL, Hadoop, Spark, Dimensional Modelling, OLAP, OLTP, Erwin, Data Architect and Supply Chain (preferred)

Job Description
We are looking for a talented and experienced Senior Data Modeler to join our growing team. As a Senior Data Modeler, you will be responsible for designing, implementing, and maintaining data models to enhance data quality, performance, and scalability. You will collaborate with cross-functional teams including data analysts, architects, and business stakeholders to ensure that the data models align with business requirements and drive efficient data management.

Key Responsibilities
Design, implement, and maintain data models that support business requirements, ensuring high data quality, performance, and scalability.
Collaborate with data analysts, data architects, and business stakeholders to align data models with business needs.
Leverage expertise in Azure, Databricks, and data warehousing to create and manage data solutions.
Manage and optimize relational and NoSQL databases such as Teradata, SQL Server, Oracle, MySQL, MongoDB, and Cassandra.
Contribute to and enhance the ETL processes and data integration pipelines to ensure smooth data flows.
Apply data modeling principles and techniques, such as ERD and UML, to design and implement effective data models.
Stay up-to-date with industry trends and emerging technologies, such as big data technologies like Hadoop and Spark.
Develop and maintain data models using data modeling tools such as ER/Studio and Hackolade.
Drive the adoption of best practices and standards for data modeling within the organization.

Skills And Qualifications
Minimum of 6+ years of experience in data modeling, with a proven track record of implementing scalable and efficient data models.
Expertise in Azure and Databricks for building data solutions.
Proficiency in ER/Studio, Hackolade, and other data modeling tools.
Strong understanding of data modeling principles and techniques (e.g., ERD, UML).
Experience with relational databases (e.g., Teradata, SQL Server, Oracle, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Solid understanding of data warehousing, ETL processes, and data integration.
Familiarity with big data technologies such as Hadoop and Spark is an advantage.
Industry Knowledge: A background in supply chain is preferred but not mandatory.
Excellent analytical and problem-solving skills.
Strong communication skills, with the ability to interact with both technical and non-technical stakeholders.
Ability to work well in a collaborative, fast-paced environment.

Education
B.Tech in any branch or specialization

Skills: data visualization, OLTP, Databricks, Spark, data modelling, supply chain, Oracle, Hadoop, dimensional modelling, ER/Studio, NoSQL, data warehouse, data modeling, data architect, ETL, Erwin, MySQL, OLAP, PostgreSQL

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

India

Remote

Job Title: Lead Data Engineering Manager – GCP Cloud Migration
Location: [Remote]
Experience: 12–15 Years (5+ Years Leading Data Teams, 8+ Years in Data Engineering)
Employment Type: Full-Time

About the Role:
We are seeking an experienced Lead Data Engineering Manager to drive the end-to-end migration of enterprise data platforms, ETL pipelines, and data warehouses to the cloud, with a focus on Google Cloud Platform (GCP). This role will lead high-performing engineering teams and collaborate with cross-functional stakeholders to architect and execute scalable, secure, and modern data solutions using BigQuery, Dataform, Dataplex, and other cloud-native tools. A background in premium consulting or strategic technology advisory is highly preferred, as this role will engage with executive stakeholders and contribute to data transformation strategies at the enterprise level.

Key Responsibilities:
Lead and mentor Data Engineering teams across design, development, and deployment of modern cloud data architectures.
Drive cloud migration initiatives, including re-platforming legacy ETL workflows and on-prem DWHs to GCP-based solutions.
Architect and implement scalable data pipelines using BigQuery, Dataform, and orchestration tools.
Ensure robust data governance and cataloging practices leveraging Dataplex and other GCP services.
Collaborate with data analysts, data scientists, and business stakeholders to enable advanced analytics and ML capabilities.
Establish and enforce engineering best practices, CI/CD pipelines, and monitoring strategies.
Provide technical leadership, project planning, and resource management to deliver projects on time and within scope.
Represent the data engineering function in client or leadership meetings, especially in a consulting or multi-client context.

Required Skills & Qualifications:
12–15 years of total experience, with 8+ years in data engineering and 5+ years in team leadership roles.
Proven expertise in cloud-based data platforms, especially GCP (BigQuery, Dataflow, Dataform, Dataplex, Cloud Composer, Pub/Sub).
Strong knowledge of modern ETL/ELT practices, data modeling, and pipeline orchestration.
Experience with data warehouse modernization and migration from platforms like Teradata, Oracle, or Hadoop to GCP.
Familiarity with data governance, metadata management, and data cataloging.
Background in consulting or strategic data advisory with Fortune 500 clients preferred.
Hands-on skills in SQL, Python, and cloud infrastructure-as-code (e.g., Terraform).
Strong communication, stakeholder engagement, and leadership presence.

Preferred Qualifications:
GCP Data Engineer or Architect Certification.
Experience with agile methodologies and DevOps practices.
Prior work with multi-cloud or hybrid environments is a plus.
Experience in regulated industries (finance, healthcare, etc.) is advantageous.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Lowe’s
Lowe's Companies, Inc. (NYSE: LOW) is a FORTUNE® 50 home improvement company serving approximately 17 million customer transactions a week in the U.S. With total fiscal year 2022 sales of over $97 billion, approximately $92 billion of sales were generated in the U.S., where Lowe's operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe's supports the communities it serves through programs focused on creating safe, affordable housing and helping to develop the next generation of skilled trade experts.

About Lowe’s India
At Lowe's India, we are the enablers who help create an engaging customer experience for our $97 billion home improvement business at Lowe's. Our 4000+ associates work across technology, analytics, business operations, finance & accounting, product management, and shared services. We leverage new technologies and find innovative methods to ensure that Lowe's has a competitive edge in the market.

About The Team
The Pricing Analytics team supports pricing managers and merchants in defining and optimizing pricing strategies for various product categories across the channels. The team leverages advanced analytics to forecast and measure the impact of pricing actions, develop strategic price zones, recommend price changes, and identify sales/margin opportunities to achieve company targets.

Job Summary
The primary purpose of this role is the development and maintenance of descriptive and predictive analytics models and tools in support of Lowe's pricing strategy. Working closely with the Pricing Analytics & Insights team, the analyst will assist in translating pricing goals and objectives into the underlying data and analytics requirements. Using open source and commercial software tools for data science, the analyst will gather and wrangle data to deliver data-driven insights, trends, and anomalies, and will develop the most appropriate statistical and machine learning techniques to answer relevant questions and provide retail recommendations. The analyst will actively collaborate and partner with other technical and non-technical teams to incorporate feedback at various stages of development, driving continuous improvement toward a best-in-class position in the pricing space.

Roles & Responsibilities
Support translation of business goals and objectives into analytics requirements.
Design and develop processes to gather, explore, structure, enrich, and clean large datasets from various internal and external sources.
Perform data validation, outlier detection, and root cause analysis to prepare inputs for statistical and machine learning models.
Research, design, and develop applicable statistical and machine learning approaches for specific business problems.
Assess the quality of model output to identify meaningless results and investigate root causes through detailed analysis of data inputs and code.
Develop scalable retail test processes and frameworks for design of experiments, A/B testing, baseline development, assessment, and recommendations.
Collaborate and partner with Pricing Strategy & Execution, Analytics COE, Merchandising, IT, and others to scope out, prioritize, and develop innovative capabilities.
Convert mundane manual processes into automated ones.

Years Of Experience
3-6 years of relevant experience

Education Qualification & Certifications
Required Minimum Qualifications: Bachelor's or Master's in Engineering, Business Analytics, Data Science, Statistics, Economics, or Math

Skill Set Required
Primary Skills (must have):
3+ years of experience in advanced quantitative analysis and statistical modeling
Ability to apply analytical concepts such as regression, sampling techniques, hypothesis testing, segmentation, time series analysis, multivariate statistical analysis, and predictive modelling
3+ years' experience in corporate Data Science, Analytics, Pricing & Promotions, Merchandising, or Revenue Management
3+ years' experience working with common analytics and data science software and technologies such as SQL, Python, R, or SAS
3+ years' experience working with enterprise-level databases (e.g., Hadoop, Teradata, Oracle, DB2)
3+ years' experience using enterprise-grade data visualization tools (e.g., Power BI, Tableau)
Secondary Skills (desired):
Technical expertise in Alteryx, Knime, GCP, Azure

Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law. Starting rate of pay may vary based on factors including, but not limited to, position offered, location, education, training, and/or experience. For information regarding our benefit programs and eligibility, please visit https://talent.lowes.com/us/en/benefits.

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Location: Hyderabad / Bangalore / Pune
Experience: 7-16 Years
Work Mode: Hybrid
Mandatory Skills: ER/Studio, Data Warehouse, Data Modelling, Databricks, ETL, PostgreSQL, MySQL, Oracle, NoSQL, Hadoop, Spark, Dimensional Modelling, OLAP, OLTP, Erwin, Data Architect and Supply Chain (preferred)

Job Description
We are looking for a talented and experienced Senior Data Modeler to join our growing team. As a Senior Data Modeler, you will be responsible for designing, implementing, and maintaining data models to enhance data quality, performance, and scalability. You will collaborate with cross-functional teams including data analysts, architects, and business stakeholders to ensure that the data models align with business requirements and drive efficient data management.

Key Responsibilities
Design, implement, and maintain data models that support business requirements, ensuring high data quality, performance, and scalability.
Collaborate with data analysts, data architects, and business stakeholders to align data models with business needs.
Leverage expertise in Azure, Databricks, and data warehousing to create and manage data solutions.
Manage and optimize relational and NoSQL databases such as Teradata, SQL Server, Oracle, MySQL, MongoDB, and Cassandra.
Contribute to and enhance the ETL processes and data integration pipelines to ensure smooth data flows.
Apply data modeling principles and techniques, such as ERD and UML, to design and implement effective data models.
Stay up-to-date with industry trends and emerging technologies, such as big data technologies like Hadoop and Spark.
Develop and maintain data models using data modeling tools such as ER/Studio and Hackolade.
Drive the adoption of best practices and standards for data modeling within the organization.

Skills And Qualifications
Minimum of 6+ years of experience in data modeling, with a proven track record of implementing scalable and efficient data models.
Expertise in Azure and Databricks for building data solutions.
Proficiency in ER/Studio, Hackolade, and other data modeling tools.
Strong understanding of data modeling principles and techniques (e.g., ERD, UML).
Experience with relational databases (e.g., Teradata, SQL Server, Oracle, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Solid understanding of data warehousing, ETL processes, and data integration.
Familiarity with big data technologies such as Hadoop and Spark is an advantage.
Industry Knowledge: A background in supply chain is preferred but not mandatory.
Excellent analytical and problem-solving skills.
Strong communication skills, with the ability to interact with both technical and non-technical stakeholders.
Ability to work well in a collaborative, fast-paced environment.

Education
B.Tech in any branch or specialization

Skills: data visualization, OLTP, Databricks, Spark, data modelling, supply chain, Oracle, Hadoop, dimensional modelling, ER/Studio, NoSQL, data warehouse, data modeling, data architect, ETL, Erwin, MySQL, OLAP, PostgreSQL

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Location: Hyderabad / Bangalore / Pune
Experience: 7-16 Years
Work Mode: Hybrid
Mandatory Skills: ER/Studio, Data Warehouse, Data Modelling, Databricks, ETL, PostgreSQL, MySQL, Oracle, NoSQL, Hadoop, Spark, Dimensional Modelling, OLAP, OLTP, Erwin, Data Architect and Supply Chain (preferred)

Job Description
We are looking for a talented and experienced Senior Data Modeler to join our growing team. As a Senior Data Modeler, you will be responsible for designing, implementing, and maintaining data models to enhance data quality, performance, and scalability. You will collaborate with cross-functional teams including data analysts, architects, and business stakeholders to ensure that the data models align with business requirements and drive efficient data management.

Key Responsibilities
Design, implement, and maintain data models that support business requirements, ensuring high data quality, performance, and scalability.
Collaborate with data analysts, data architects, and business stakeholders to align data models with business needs.
Leverage expertise in Azure, Databricks, and data warehousing to create and manage data solutions.
Manage and optimize relational and NoSQL databases such as Teradata, SQL Server, Oracle, MySQL, MongoDB, and Cassandra.
Contribute to and enhance the ETL processes and data integration pipelines to ensure smooth data flows.
Apply data modeling principles and techniques, such as ERD and UML, to design and implement effective data models.
Stay up-to-date with industry trends and emerging technologies, such as big data technologies like Hadoop and Spark.
Develop and maintain data models using data modeling tools such as ER/Studio and Hackolade.
Drive the adoption of best practices and standards for data modeling within the organization.

Skills And Qualifications
Minimum of 6+ years of experience in data modeling, with a proven track record of implementing scalable and efficient data models.
Expertise in Azure and Databricks for building data solutions.
Proficiency in ER/Studio, Hackolade, and other data modeling tools.
Strong understanding of data modeling principles and techniques (e.g., ERD, UML).
Experience with relational databases (e.g., Teradata, SQL Server, Oracle, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Solid understanding of data warehousing, ETL processes, and data integration.
Familiarity with big data technologies such as Hadoop and Spark is an advantage.
Industry Knowledge: A background in supply chain is preferred but not mandatory.
Excellent analytical and problem-solving skills.
Strong communication skills, with the ability to interact with both technical and non-technical stakeholders.
Ability to work well in a collaborative, fast-paced environment.

Education
B.Tech in any branch or specialization

Skills: data visualization, OLTP, Databricks, Spark, data modelling, supply chain, Oracle, Hadoop, dimensional modelling, ER/Studio, NoSQL, data warehouse, data modeling, data architect, ETL, Erwin, MySQL, OLAP, PostgreSQL

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Location: Hyderabad / Bangalore / Pune
Experience: 7-16 Years
Work Mode: Hybrid
Mandatory Skills: ER/Studio, Data Warehouse, Data Modelling, Databricks, ETL, PostgreSQL, MySQL, Oracle, NoSQL, Hadoop, Spark, Dimensional Modelling, OLAP, OLTP, Erwin, Data Architect and Supply Chain (preferred)

Job Description
We are looking for a talented and experienced Senior Data Modeler to join our growing team. As a Senior Data Modeler, you will be responsible for designing, implementing, and maintaining data models to enhance data quality, performance, and scalability. You will collaborate with cross-functional teams including data analysts, architects, and business stakeholders to ensure that the data models align with business requirements and drive efficient data management.

Key Responsibilities
Design, implement, and maintain data models that support business requirements, ensuring high data quality, performance, and scalability.
Collaborate with data analysts, data architects, and business stakeholders to align data models with business needs.
Leverage expertise in Azure, Databricks, and data warehousing to create and manage data solutions.
Manage and optimize relational and NoSQL databases such as Teradata, SQL Server, Oracle, MySQL, MongoDB, and Cassandra.
Contribute to and enhance the ETL processes and data integration pipelines to ensure smooth data flows.
Apply data modeling principles and techniques, such as ERD and UML, to design and implement effective data models.
Stay up-to-date with industry trends and emerging technologies, such as big data technologies like Hadoop and Spark.
Develop and maintain data models using data modeling tools such as ER/Studio and Hackolade.
Drive the adoption of best practices and standards for data modeling within the organization.

Skills And Qualifications
Minimum of 6+ years of experience in data modeling, with a proven track record of implementing scalable and efficient data models.
Expertise in Azure and Databricks for building data solutions.
Proficiency in ER/Studio, Hackolade, and other data modeling tools.
Strong understanding of data modeling principles and techniques (e.g., ERD, UML).
Experience with relational databases (e.g., Teradata, SQL Server, Oracle, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Solid understanding of data warehousing, ETL processes, and data integration.
Familiarity with big data technologies such as Hadoop and Spark is an advantage.
Industry Knowledge: A background in supply chain is preferred but not mandatory.
Excellent analytical and problem-solving skills.
Strong communication skills, with the ability to interact with both technical and non-technical stakeholders.
Ability to work well in a collaborative, fast-paced environment.

Education
B.Tech in any branch or specialization

Skills: data visualization, OLTP, Databricks, Spark, data modelling, supply chain, Oracle, Hadoop, dimensional modelling, ER/Studio, NoSQL, data warehouse, data modeling, data architect, ETL, Erwin, MySQL, OLAP, PostgreSQL

Posted 3 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Who Are We
At BCE Global Tech, immerse yourself in exciting projects that are shaping the future of both consumer and enterprise telecommunications. This involves building innovative mobile apps to enhance user experiences and enable seamless connectivity on-the-go. Thrive in diverse roles like Full Stack Developer, Backend Developer, UI/UX Designer, DevOps Engineer, Cloud Engineer, Data Science Engineer, and Scrum Master, at a workplace that encourages you to freely share your bold and different ideas. If you are passionate about technology and eager to make a difference, we want to hear from you! Apply now to join our dynamic team in Bengaluru.

The role demands strong leadership skills with a focus on risk management and the ability to deliver results-oriented outcomes. The candidate must understand business needs, maintain a customer-focused approach, and demonstrate the ability to adapt and work in a fast-paced environment, managing multiple projects and tolerating ambiguity effectively.

Key Responsibilities
Experience in business intelligence related functions (testing, analysis, coding, etc.)
Architect and provide guidance on building end-to-end systems optimized for speed and scale
Proficiency in programming languages including SAS, Oracle and Teradata SQL
Strong knowledge of relational databases: MySQL, DB2, etc.
Autonomous and self-motivated, with an ability to prioritize
Experience working in agile

Technology Skills
Advanced SQL skills, preferably Teradata
Working knowledge of SAS Viya, the new cloud product (good to have)
Google Cloud experience, including BigQuery (GCP preferred; other clouds also considered)
Working knowledge of MicroStrategy
Strong leadership skills with risk management
Results-oriented
Understanding of business needs/input; customer focus
Ability to adapt and work in a fast-paced environment, with good tolerance for ambiguity and the capacity to manage multiple projects

Required Qualifications To Be Successful In This Role
Work with multiple teams to collect and document business requirements / user stories
Work within various database environments such as Teradata and SAS to design, develop and deliver data solutions, ensuring accuracy and quality
Design, construct, modify, integrate, implement and test data models and database management systems
Utilize advanced SQL to analyze data and perform data mining analysis; effectively manage data mining analysis and report to various stakeholders
Establish and maintain effective relationships with cross-functional product teams, colleagues and process teams, including in virtual environments (i.e. conference calls, videoconferences)

Additional Information
Job Type: Permanent Full Time
Work Profile: Hybrid (Work from Office / Remote)
Years of Experience: 8-10 Years
Location: Bangalore

What We Offer
Competitive salaries and comprehensive health benefits
Flexible work hours and remote work options
Professional development and training opportunities
A supportive and inclusive work environment

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Business Area: Professional Services
Seniority Level: Mid-Senior level

Job Description:
At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world’s largest enterprises.

Strategic Customer Success Managers (CSMs) are charged with driving Success Plans, operating as trusted advisors and customer advocates to drive successful outcomes for our highest profile customers. You will act as a direct liaison between Cloudera and the customer on their Cloudera journey, ensuring that all facets of the customer product adoption journey take place: planning, preparing and driving the execution of the plan in accordance with the customer-agreed timeline. You will pull the customer above the line to enable them to fulfill success initiatives within their business. Acting as the customer’s trusted Cloudera advisor and advocate, CSMs manage a system of checks and balances between the company and the customer, understanding our customer’s needs and aligning the appropriate Cloudera resources (Engineering, Product Management, Support or Services). This includes assisting with critical escalation management to ensure their Cloudera interactions, business objectives and product adoption are a success. You are comfortable working across business, technical, and senior management in a customer facing role, and you are confident and articulate in communication with stakeholders. A key aspect to success in the role is persistence: forming a relationship of trust with the customer, anticipating issues, and acting with agility and flexibility in the face of any situation that may arise.

As a Customer Success Manager you will:
Exemplify strong customer facing skills and stakeholder management
Take ownership of the customer’s adoption of Cloudera products and of the Success Plan
Have experience operating on-site with large enterprise customers
Liaise and facilitate with key internal and external stakeholders
Have clear and concise communication skills
Take an ownership mentality over your customers and work
Be comfortable with program and project management
Demonstrate best practices and ability in managing stakeholder escalations to mutually agreeable outcomes
Develop and leverage reports, dashboards, and data to summarise customer engagements and statuses, both for internal audiences and back to your customers
Analyse customer activity and data across the Cloudera organisation, including from Sales, Support, Professional Services, and Training, to identify trends, gaps, or issues, and communicate or action those with stakeholders (internal and external)

We’re excited about you if you have:
Fluent English language skills; other languages a plus
Experience with large scale data platforms
Experience with software implementation and upgrade management
Strong technology background, in either a technical or a business capacity
Familiarity with project lifecycle management and the complexities around project delivery
Understanding of data management concepts
Understanding of ITIL concepts
Experience with an RDBMS (Oracle, MySQL, Teradata, etc.)
Understanding of networking concepts
Industry vertical experience, including FSI, Telecommunications, Manufacturing or Government, a plus
5 or more years of relevant experience
Degree-level education with high attainment, or equivalent experience in a related field

What you can expect from us:
Generous PTO policy
Support for work-life balance with Unplugged Days
Flexible WFH policy
Mental & physical wellness programs
Phone and internet reimbursement program
Access to continued career development
Comprehensive benefits and competitive packages
Paid volunteer time
Employee resource groups

Cloudera is an Equal Opportunity / Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.

Posted 3 weeks ago

Apply

Exploring Teradata Jobs in India

Teradata is a popular data warehousing platform that is widely used by businesses in India. As a result, there is a growing demand for skilled professionals who can work with Teradata effectively. Job seekers in India who have expertise in Teradata have a wide range of opportunities available to them across different industries.

Top Hiring Locations in India

  1. Bengaluru
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving tech industries and have a high demand for Teradata professionals.

Average Salary Range

The average salary range for Teradata professionals in India varies based on experience levels. Entry-level roles can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the field of Teradata, a typical career path may involve progressing from roles such as Junior Developer to Senior Developer, and eventually to a Tech Lead position. With experience and skill development, professionals can take on more challenging and higher-paying roles in the industry.

Related Skills

In addition to Teradata expertise, professionals in this field are often expected to have knowledge of SQL, data modeling, ETL tools, and data warehousing concepts. Strong analytical and problem-solving skills are also essential for success in Teradata roles.
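For example, a role combining SQL with dimensional-modelling knowledge often expects a candidate to aggregate a fact table against a dimension. The short Teradata-style SQL sketch below illustrates the pattern; the table and column names (sales_fact, store_dim) are hypothetical, not taken from any real schema:

  -- Hypothetical star-schema query: total sales by store region and month.
  SELECT d.region,
         EXTRACT(MONTH FROM f.sale_date) AS sale_month,
         SUM(f.sale_amount) AS total_sales
  FROM sales_fact f
  JOIN store_dim d
    ON f.store_id = d.store_id
  GROUP BY d.region, EXTRACT(MONTH FROM f.sale_date)
  ORDER BY d.region, sale_month;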

Interview Questions

  • What is Teradata and how is it different from other database management systems? (basic)
  • Can you explain the difference between a join and a merge in Teradata? (medium)
  • How would you optimize a Teradata query for performance? (medium)
  • What are fallback tables in Teradata and why are they important? (advanced)
  • How do you handle duplicate records in Teradata? (basic; see the sketch after this list)
  • What is the purpose of a collect statistics statement in Teradata? (medium)
  • Explain the concept of indexing in Teradata. (medium)
  • How does Teradata handle concurrency control? (advanced)
  • Can you describe the process of data distribution in Teradata? (medium)
  • What are the different types of locks in Teradata and how are they used? (advanced)
  • How would you troubleshoot performance issues in a Teradata system? (medium)
  • What is a Teradata View and how is it different from a Table? (basic)
  • How do you handle NULL values in Teradata? (basic)
  • Can you explain the difference between FastLoad and MultiLoad in Teradata? (medium)
  • What is the Teradata Parallel Transporter? (advanced)
  • How do you perform data migration in Teradata? (medium)
  • Explain the concept of fallback protection in Teradata. (advanced)
  • What are the different types of Teradata macros and how are they used? (advanced)
  • How do you monitor and manage Teradata performance? (medium)
  • What is the purpose of the Teradata QueryGrid? (advanced)
  • How do you optimize the storage of data in Teradata? (medium)
  • Can you explain the concept of Teradata indexing strategies? (advanced)
  • How do you handle data security in Teradata? (medium)
  • What are the best practices for Teradata database design? (medium)
  • How do you ensure data integrity in a Teradata system? (medium)
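Several of these questions lend themselves to short worked answers. The hedged sketch below, in Teradata-flavoured SQL, illustrates four of them: de-duplicating records with QUALIFY, refreshing optimizer statistics with COLLECT STATISTICS, substituting defaults for NULLs with COALESCE, and defining a simple macro. All table, column, and macro names (customer, customer_stg, get_customer) are hypothetical:

  -- De-duplicate a staging table: keep only the latest row per customer_id.
  -- QUALIFY filters on a window function without needing a subquery.
  INSERT INTO customer
  SELECT customer_id, customer_name, updated_at
  FROM customer_stg
  QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id
                             ORDER BY updated_at DESC) = 1;

  -- Refresh optimizer statistics so the planner has current column demographics.
  COLLECT STATISTICS ON customer COLUMN (customer_id);

  -- Handle NULLs explicitly: COALESCE substitutes a default for missing names.
  SELECT customer_id,
         COALESCE(customer_name, 'UNKNOWN') AS customer_name
  FROM customer;

  -- A simple macro bundling a parameterized query for reuse; parameters are
  -- referenced with a leading colon, and the macro is run with EXEC.
  CREATE MACRO get_customer (cid INTEGER) AS (
    SELECT customer_id, customer_name
    FROM customer
    WHERE customer_id = :cid;
  );
  EXEC get_customer (1001);

Being able to explain why each statement works (for instance, that QUALIFY is evaluated after the window function, or that stale statistics mislead the optimizer) matters as much in an interview as the syntax itself.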

Closing Remark

As you prepare for interviews and explore job opportunities in Teradata, remember to showcase your skills and experience confidently. With the right preparation and determination, you can land a rewarding role in the dynamic field of Teradata in India. Good luck!
