Case Study: EBGP

The EBGP case study is designed to emulate a typical JNCIP-level EBGP and ISP policy configuration scenario. In the interest of 'mixing things up,' you will configure the EBGP case study using the IBGP confederation topology demonstrated in Chapter 5. The Multi-Level IS-IS topology that will support your IBGP and EBGP sessions stems from the Chapter 4 case study and is shown in Figure 6.4 so that you may reacquaint yourself with it.

Figure 6.4: Multi-Level IS-IS topology from Chapter 4 case study

You should load and commit the saved IBGP confederation configurations produced in the 'IBGP Confederations' section of Chapter 5 to ensure that your routers will look and behave like the examples shown here. Before starting the EBGP case study, you should verify the correct operation of all routers, interfaces, IS-IS, and the OSPF router using the confirmation steps outlined in the case study at the end of Chapter 4. You may want to review the IS-IS case study requirements to refresh your memory as to the specifics of the IGP that now supports your EBGP case study configuration.
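For example, a quick spot check of the IGP from any router might look like the following sketch (commands only, output omitted; the full confirmation procedure appears in the Chapter 4 case study):

lab@r5> show isis adjacency
lab@r5> show route protocol ospf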

Figure 6.5 shows the IBGP confederation topology from Chapter 5 that serves as your EBGP case study starting point. Before starting this case study, confirm proper IBGP operation using the confirmation steps provided in the 'IBGP Confederations' section of Chapter 5.

Figure 6.5: IBGP confederation topology from Chapter 5

You will want to refer to the criteria listing and the information shown in Figure 6.6, the EBGP case study topology, for the information you will need to complete this case study. It is expected that a JNCIP exam candidate will be able to complete this case study in approximately two hours, with the completed EBGP and ISP policy configuration resulting in a network with no serious connectivity or operational problems. Sample configurations from all seven routers are provided at the end of the case study for comparison with your own configurations. Because multiple solutions are usually possible, differences between the provided examples and your own configurations do not always indicate that mistakes have been made. Because you are graded on the overall functionality of your network and its conformity to the specified configuration criteria, various operational mode commands are included so that you can compare the behavior of your network to a known good example.

Figure 6.6: EBGP case study topology

To complete this case study, your EBGP configuration must meet the following criteria.

  • Your EBGP and policy configuration must be added to the IBGP confederation example from Chapter 5.

  • Establish EBGP peering sessions according to the following criteria:

    • EBGP load balance over the two links connecting r4 to C1.

    • EBGP load balance from r3 to T1 and T2.

    • The P1 router must peer to both r1 and r2 using interface addresses.

    • The C1 router uses authentication with secret jnx.

    • The C2 router has been incorrectly set to peer with AS 65413. You must bring up the EBGP session without modifying the C2 router's configuration.

    • r4 must write to the syslog when C1 advertises more than 10 IPv4 unicast routes.

  • Policy requirements

    • Originate three advertisements to EBGP peers reflecting your 10/8 space, the OSPF router's routes, and the OSPF subnets, without altering the routing-options stanza on r3, r4, r6, and r7.

    • You cannot use generated routes, but a single static route is permitted on both r1 and r2. Interface or link failures cannot disrupt P1's connectivity.

    • Prepend 64512 64512 to all routes received from P1. Ensure that transit providers do not receive these AS numbers.

    • Use communities to tag routes based on the EBGP peering point where they are learned. Ensure that routes learned from each peering point can be uniquely identified.

    • Remove all communities received from the P1 router.

    • Without using policy, make sure you do not install any 192.0.2/24 test-net prefixes from EBGP peers as active routes.

    • Accept all customer routes that have originated in customer sites to accommodate the C1-C2 EBGP peering shown in Figure 6.6.

    • Accept no routes with prefixes longer than a /26.

    • Use local preference so that customer routes are preferred over transit routes.

    • Do not accept any default routes or RFC 1918 routes from EBGP peers.

    • Send peer EBGP routes to all sites. Do not send transit provider routes to peers.

    • Customers receive all EBGP routes, and all sites receive customer EBGP routes.

    • r6 must advertise a MED to T2 based on its IGP metrics.

    • Damp transit provider routes based on prefix length according to these criteria:

    • Prefix lengths 0-8 = No damping

    • Prefix lengths 9-16 = 20-minute half-life and reuse of 1000

    • Prefix lengths 17-32 = 25-minute half-life and reuse of 1500

    • 210.0/16 or longer = No damping

    • All routers in your AS should forward through r2 to reach peer prefixes when r2 is operational.

    • Ensure that transit providers use the r6 peering link when forwarding traffic to customer destinations without setting MED on r3.

    • You cannot have any black holes or suboptimal routing.

EBGP Case Study Analysis

Each configuration requirement for the case study will now be matched to one or more valid router configurations and commands that can be used to confirm whether your network is operating within the specified case study guidelines. We begin with these criteria, as they serve to establish a baseline for your BGP connectivity:

  • Establish EBGP peering sessions according to the following requirements:

    • EBGP load balance over the two links connecting r4 to C1.

    • EBGP load balance from r3 to T1 and T2.

    • The P1 router must peer to both r1 and r2 using interface addresses.

    • The C1 router uses authentication with secret jnx.

    • The C2 router has been incorrectly set to peer with AS 65413. You must bring up the EBGP session without modifying the C2 router's configuration.

    • r4 must write to the syslog when C1 advertises more than 10 IPv4 unicast routes.

r1 and r2 EBGP Peering

We begin our analysis with the EBGP peering configuration for r1. r2's P1 peering configuration is identical and is not shown here.

[edit]
lab@r1# show protocols bgp group p1
type external;
neighbor 10.0.5.254 {
    peer-as 1492;
}

The P1 peering session is then confirmed to be operational:

[edit]
lab@r1# run show bgp summary
Groups: 2 Peers: 2 Down peers: 1
Table          Tot Paths  Act Paths Suppressed    History Damp State     Pending
inet.0                15         14          0          0          0           0
Peer               AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped...
10.0.3.3        65000      39001         91       0       1       10:17 Active
10.0.5.254       1492         31         36       0       0       11:48 14/15/0               0/0/0

The P1 peering session is established, but the establishment of the EBGP session to P1 has caused the loss of the IBGP session to r3. Further analysis confirms this is due to the presence of a Martian route being received from P1:

[edit]
lab@r1# run show route 10.0.3.3

inet.0: 29 destinations, 30 routes (29 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/4          *[BGP/170] 00:11:50, MED 0, localpref 100
                      AS path: 1492 I
                    > to 10.0.5.254 via fe-0/0/0.0

Your subsequent Martian filtering activities at r1 and r2 will resolve this problem.

r4 EBGP Peering

A working EBGP peering stanza for r4 is shown next. Note the presence of the multihop and local-address statements needed to support the loopback-based EBGP peering session to C1:

[edit]
lab@r4# show protocols bgp group c1
type external;
multihop;
local-address 10.0.3.4;
family inet {
    unicast {
        prefix-limit {
            maximum 10;
        }
    }
}
peer-as 65010;
neighbor 200.200.0.1 {
    authentication-key "$9$n2/i9tOMWx7VY"; # SECRET-DATA
}

The prefix-limit related configuration will result in syslog entries when more than 10 unicast IPv4 routes are advertised by C1. Adding the teardown option would cause r4 to clear the connection when the prefix limit is exceeded, which is a behavior outside the requirements of this case study (a sketch of that variant follows the syslog confirmation below). The static routing needed to back up the loopback peering is shown next, as are the correct interface address assignments for r4's EBGP peering to C1:

[edit]
lab@r4# show routing-options static
route 10.0.200.0/24 next-hop 10.0.1.102;
route 192.168.40.0/24 reject;
route 200.200.0.1/32 next-hop [ 172.16.0.6 172.16.0.10 ];

[edit]
lab@r4# show interfaces fe-0/0/0
unit 0 {
    family inet {
        address 172.16.0.5/30;
    }
}

[edit]
lab@r4# show interfaces fe-0/0/2
unit 0 {
    family inet {
        address 172.16.0.9/30;
    }
}

Proper EBGP session establishment to C1 is confirmed:

lab@r4# run show bgp summary
Groups: 3 Peers: 4 Down peers: 1
Table          Tot Paths  Act Paths Suppressed    History Damp State     Pending
inet.0                19         15          0          0          0           0
Peer               AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped...
200.200.0.1     65010        113        115       0       0       54:04 12/12/0               0/0/0
10.0.3.3        65000      39102      39110       0       0     1:36:14 4/4/0                 0/0/0
10.0.3.5        65002        180        183       0       1     1:23:19 0/4/0                 0/0/0
10.0.6.2        65001         38      38950       0       1     1:16:48 Active

Considering that 12 prefixes have been advertised by C1, you can confirm the prefix-limit configuration by examining the messages file:

[edit]
lab@r4# run show log messages | match prefix
Jun 25 11:22:54 r4 rpd[580]: 200.200.0.1 (External AS 65010): Configured maximum prefix-limit(10) exceeded for inet-unicast nlri: 12
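For reference, had the criteria called for dropping the session rather than simply logging, a teardown variant of the prefix limit might look like the following sketch (it is shown here for illustration only, as it falls outside this case study's requirements):

[edit protocols bgp group c1]
lab@r4# show family inet
unicast {
    prefix-limit {
        maximum 10;
        teardown;
    }
}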

r7 EBGP Peering

The fact that C2 has been misconfigured with regard to r7's AS number, coupled with your inability to modify C2's configuration, means that you will need to use the local-as option to allow r7 to "appear" as if it belongs to a different AS. A working c2 stanza for r7 is shown next along with the required interface configuration:

[edit protocols bgp]
lab@r7# show group c2
type external;
local-as 65413;
neighbor 172.16.0.26 {
    peer-as 65020;
}

[edit]
lab@r7# show interfaces fe-0/3/2
unit 0 {
    family inet {
        address 172.16.0.25/30;
    }
}

The EBGP peering session to C2 is confirmed to be operational:

[edit]
lab@r7# run show bgp summary
Groups: 2 Peers: 3 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State     Pending
inet.0                17         16          0          0          0           0
Peer               AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped...
172.16.0.26     65020         61         65       0       0       27:57 12/13/0               0/0/0
10.0.3.5        65002        276        269       0       1     2:12:29 3/3/0                 0/0/0
10.0.9.6        65002        290        295       0       0     2:24:52 1/1/0                 0/0/0

r6 EBGP Peering

r6's EBGP peering is relatively straightforward. A working t2 stanza along with the necessary interface configuration is shown next:

lab@r6# show group t2
type external;
neighbor 172.16.0.22 {
    peer-as 65222;
}

[edit]
lab@r6# show interfaces fe-0/1/2
unit 0 {
    family inet {
        address 172.16.0.21/30;
    }
}

After the commit, EBGP session establishment to T2 is confirmed:

[edit protocols bgp]
lab@r6# run show bgp summary
Groups: 2 Peers: 3 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State     Pending
inet.0            117565     117564          0          0          0           0
Peer               AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped...
172.16.0.22     65222      30723      30800       0       0          19 117548/117549/0       0/0/0
10.0.3.5        65002        289      30999       0       1     2:18:19 3/3/0                 0/0/0
10.0.9.7        65002        306      22767   27816       1     2:30:38 13/13/0               0/0/0

r3 EBGP Peering

r3's EBGP peering stanza requires use of the multipath option to accommodate load balancing. A working t1-t2 stanza and the necessary interface configuration statements are shown next:

[edit protocols bgp group t1-t2]
lab@r3# show
type external;
peer-as 65222;
multipath;
neighbor 172.16.0.14;
neighbor 172.16.0.18;

[edit]
lab@r3# show interfaces fe-0/0/2
unit 0 {
    family inet {
        address 172.16.0.13/30;
    }
    family iso;
}

[edit]
lab@r3# show interfaces fe-0/0/3
unit 0 {
    family inet {
        address 172.16.0.17/30;
    }
}

The family iso setting on r3's fe-0/0/2 interface was called for in Chapter 5 to allow the advertisement of the 172.16.0.12/30 prefix as an IS-IS level 2 internal prefix. The presence of this route within your network does not preclude the use of a next hop self policy on r3 during subsequent policy-related configuration, but it does mean that r3 really needs next hop self treatment only for the peering session to T2. r3's EBGP sessions to T1 and T2 are now confirmed:

[edit]
lab@r3# run show bgp summary
Groups: 3 Peers: 5 Down peers: 1
Table          Tot Paths  Act Paths Suppressed    History Damp State     Pending
inet.0            235105     235103          0          0          0           0
Peer               AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped...
172.16.0.14     65222      22123      24654       0       0        6:52 117544/117544/0       0/0/0
172.16.0.18     65222      21138      23863       0       0        6:48 117545/117545/0       0/0/0
10.0.3.4        65001      64029      63704       0       1     2:55:14 11/13/0               0/0/0
10.0.3.5        65002      23228      24350       0       1     2:42:24 3/3/0                 0/0/0
10.0.6.1        65000         85      39002       0       1     2:35:48 Active

Proper multipath operation is confirmed by seeing the same number of active routes attributed to both the T1 and T2 peering sessions.
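For a per-prefix check, displaying one of the transit routes in detail should show forwarding next hops on both the fe-0/0/2 and fe-0/0/3 interfaces. A sketch of the command follows (output omitted; the 130.130/16 prefix is assumed to be among the routes learned from both T1 and T2):

[edit]
lab@r3# run show route 130.130/16 detail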

EBGP Import Policy and Martian Filtering

This section highlights the EBGP import policy and Martian filtering configurations for all routers in the test bed. We begin with the following case study requirement:

  • Without using policy, ensure that you do not install any 192.0.2/24 test-net prefixes from EBGP peers as active routes.

The inability to use routing policy and route filters means that you will have to configure the Martian table of each router in order to deny 192.0.2/24 routes, as shown next:

[edit]
lab@r6# set routing-options martians 192.0.2/24 orlonger

After the commit, the Martian table for inet.0 is shown, and a 192.0.2/24 route received from T2 is confirmed as hidden. The use of orlonger for the match type is recommended to ensure that no test-net prefixes will be accepted:

[edit]
lab@r6# run show route martians table inet.0

inet.0:
             0.0.0.0/0 exact -- allowed
             0.0.0.0/8 orlonger -- disallowed
             127.0.0.0/8 orlonger -- disallowed
             128.0.0.0/16 orlonger -- disallowed
             191.255.0.0/16 orlonger -- disallowed
             192.0.0.0/24 orlonger -- disallowed
             223.255.255.0/24 orlonger -- disallowed
             240.0.0.0/4 orlonger -- disallowed
             192.0.2.0/24 orlonger -- disallowed

[edit]
lab@r6# run show route 192.0.2/24 hidden

inet.0: 117589 destinations, 235127 routes (117588 active, 0 holddown, 1 hidden)
+ = Active Route, - = Last Active, * = Both

192.0.2.0/24        [BGP ] 00:04:01, MED 0, localpref 100
                      AS path: 65222 I
                    > to 172.16.0.22 via fe-0/1/2.0

The next EBGP import policy task relates to route damping in accordance with the following requirements.

  • Damp transit provider routes based on prefix length according to these criteria:

    • Prefix lengths 0-8 = No damping

    • Prefix lengths 9-16 = 20-minute half-life and reuse of 1000

    • Prefix lengths 17-32 = 25-minute half-life and reuse of 1500

    • 210.0/16 or longer = No damping

The policy and damping profile definitions shown for r6 correctly configure the router for the specified damping requirements:

[edit]
lab@r6# show policy-options | find damp
policy-statement damp {
    term 1 {
        from {
            route-filter 210.0.0.0/16 orlonger damping none;
            route-filter 0.0.0.0/0 prefix-length-range /0-/8 damping none;
            route-filter 0.0.0.0/0 prefix-length-range /9-/16 damping low;
            route-filter 0.0.0.0/0 prefix-length-range /17-/32 damping high;
        }
    }
}
damping none {
    disable;
}
damping high {
    half-life 25;
    reuse 1500;
}
damping low {
    half-life 20;
    reuse 1000;
}

To put damping into effect, the highlighted changes are needed in r6's BGP stanza. The damping keyword could have been applied globally if desired because it will affect only EBGP peering sessions:

[edit]
lab@r6# show protocols bgp
group 65002 {
    type internal;
    local-address 10.0.9.6;
    export ibgp;
    neighbor 10.0.9.7;
    neighbor 10.0.3.5;
}
group t2 {
    type external;
    damping;
    import damp;
    neighbor 172.16.0.22 {
        peer-as 65222;
    }
}
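If you prefer the global application mentioned above, a single statement does the job; this is a sketch of the alternative, while the group-level application shown above is what this example actually uses:

[edit protocols bgp]
lab@r6# set damping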

Before proceeding, be sure to replicate the damping-related changes shown for r6 on r3. To verify damping, you can clear your EBGP neighbor sessions several times and then display hidden routes, or use the show route damping suppressed command. Displaying hidden routes with the detail switch will allow you to see what damping parameters a given prefix is being subjected to, which allows you to confirm the proper prefix to damping profile mappings.
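A minimal sketch of one such verification sequence follows (output omitted; the neighbor address is r6's T2 peering, and the clear command may need to be repeated a few times to drive the figure of merit past the suppress threshold):

[edit]
lab@r6# run clear bgp neighbor 172.16.0.22

[edit]
lab@r6# run show route damping suppressed detail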

We now examine the case study's criteria for route filtering and tagging, beginning with the context of a customer-attached router:

  • Use communities to tag routes based on the EBGP peering point where they are learned. Ensure that routes learned from each peering point can be uniquely identified.

  • Accept all customer routes that have originated in customer sites to accommodate the C1-C2 EBGP peering shown earlier in Figure 6.6.

  • Accept no routes with prefixes longer than a /26.

    • Use local preference so that customer routes are preferred over transit routes.

  • Do not accept any default routes or RFC 1918 routes from EBGP peers.

  • Remove all communities received from the P1 router.

Table 6.2 contains this author's choice of a unique community-tagging scheme that meets the case study's requirements.

Table 6.2: Community Value Assignments

Peer Designation        Community Base
Transit                 65412:10x
Peer                    65412:20x
Customer                65412:30x

The plan here is to code the last digit of the community value with the number associated with each peer such that P1 will be identified by the community value of 65412:201, for example.

Customer-Attached Routers r4 and r7

The policy, community, and AS path definitions shown next will accommodate the import policy needs of r4 and r7. Note that the local preference setting required to ensure that customer routes will be preferred over transit routes, which retain the default local preference value of 100, has been incorporated into the cust-filter-in policy:

[edit policy-options policy-statement cust-filter-in]
lab@r4# show
term rfc1918 {
    from {
        route-filter 10.0.0.0/8 orlonger reject;
        route-filter 192.168.0.0/16 orlonger reject;
        route-filter 172.16.0.0/12 orlonger reject;
        route-filter 0.0.0.0/0 through 0.0.0.0/32 reject;
    }
}
term kill-27-or-longer {
    from {
        route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
    }
}
term tag-c1 {
    from as-path cust-1;
    then {
        community add cust-1;
    }
}
term tag-c2 {
    from as-path cust-2;
    then {
        community add cust-2;
    }
}
term prefer-cust {
    from as-path [ cust-1 cust-2 ];
    then {
        local-preference 101;
        next policy;
    }
}
term kill-rest {
    then reject;
}

[edit policy-options]
lab@r4# show | match 65412
community cust-1 members 65412:301;
community cust-2 members 65412:302;
community peer-1 members 65412:201;
community trans-1 members 65412:101;
community trans-2 members 65412:102;

[edit policy-options]
lab@r4# show | match 650
as-path cust-1 ".* 65010";
as-path cust-2 ".* 65020";

The AS path regex definitions and the specifics for their use in the cust-filter-in policy provide the required local preference modification action for customer routes that may be re-advertised by customer sites after being coupled through the C1-C2 EBGP connection shown earlier in Figure 6.6. The kill-rest term rejects routes that have not matched the prefer-cust term, thereby eliminating any routes that did not originate in a customer network.
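You can sanity-check AS path definitions like these before relying on them in policy by using the aspath-regex switch, which this case study also demonstrates later when examining transit routes; a sketch (output omitted):

[edit]
lab@r4# run show route aspath-regex ".* 65010"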

The policy-related changes made on r4 need to be copied over to r7 before proceeding to the next set of criteria.

Peer-Attached Routers r1 and r2

Routers that peer with P1 have a similar need for community definitions and Martian filtering, but also have the following unique requirements:

  • Prepend 64512 64512 to all routes received from P1. Ensure that transit providers do not receive these AS numbers.

  • Ensure that all routers in your AS forward through r2 to reach peer prefixes when r2 is operational.

  • Remove all communities received from the P1 router.

The peer-filter-in policy statement shown next will accommodate the import route filtering, community stripping, and AS path prepending needs of r1. The term ordering is important because placing the no-comms term after the tag-p1 term will result in the removal of your community tags as well as those that may have been present in the routes received from P1:

[edit]
lab@r1# show policy-options policy-statement peer-filter-in
term rfc1918 {
    from {
        route-filter 10.0.0.0/8 orlonger reject;
        route-filter 192.168.0.0/16 orlonger reject;
        route-filter 172.16.0.0/12 orlonger reject;
        route-filter 0.0.0.0/0 through 0.0.0.0/32 reject;
    }
}
term kill-27-or-longer {
    from {
        route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
    }
}
term no-comms {
    then {
        community delete all-comms;
    }
}
term tag-p1 {
    from as-path peer-1;
    then {
        community add peer-1;
        as-path-prepend "64512 64512";
    }
}

You must define your named communities and AS paths in order to commit the peer-filter-in policy:

[edit]
lab@r1# show policy-options community all-comms
members *:*;

[edit]
lab@r1# show policy-options | match 65412
community cust-1 members 65412:301;
community cust-2 members 65412:302;
community peer-1 members 65412:201;
community trans-1 members 65412:101;
community trans-2 members 65412:102;

[edit]
lab@r1# show policy-options | match 1492
as-path peer-1 ".* 1492";

The peer-filter-in policy must be applied as import to the P1 EBGP peer group before proceeding. The highlighted addition of the prefer-2 term adapts r1's import policy for use at r2. The new term ensures that all routers in your network will prefer r2 to r1 when forwarding to provider prefixes. r1's community and AS path definitions will need to be carried over to r2 as well:

[edit]
lab@r1# show policy-options policy-statement peer-filter-in
term rfc1918 {
    from {
        route-filter 10.0.0.0/8 orlonger reject;
        route-filter 192.168.0.0/16 orlonger reject;
        route-filter 172.16.0.0/12 orlonger reject;
        route-filter 0.0.0.0/0 through 0.0.0.0/32 reject;
    }
}
term kill-27-or-longer {
    from {
        route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
    }
}
term no-comms {
    then {
        community delete all-comms;
    }
}
term tag-p1 {
    from as-path peer-1;
    then {
        community add peer-1;
        as-path-prepend "64512 64512";
    }
}
term prefer-2 {
    from community peer-1;
    then {
        local-preference 101;
    }
}

The peer-filter-in policy's successful filtering of the default routes advertised by P1 allows r1 and r2 to reestablish their IBGP session to the level 1 area's attached routers r3 and r4, respectively. The correct AS path prepending behavior and, in the case of r2, the modified local preference values, are easy to verify:

[edit]
lab@r2# run show route advertising-protocol bgp 10.0.3.4

inet.0: 117260 destinations, 117261 routes (117256 active, 0 holddown, 5 hidden)
+ = Active Route, - = Last Active, * = Both

3.4.0.0/20          10.0.5.254               0        101 64512 64512 1492 I
6.0.0.0/7           10.0.5.254               0        101 64512 64512 1492 I
120.120.0.0/24      10.0.5.254               0        101 64512 64512 1492 I
120.120.1.0/24      10.0.5.254               0        101 64512 64512 1492 I
. . .
192.168.20.0/24     Self                     0        100 I

The output confirms that r2 is correctly adjusting local preference and performing its AS path prepending operations on the routes learned from the P1 peering session only. Successful community removal and the addition of locally-defined communities can be verified as shown next:

[edit]
lab@r2# run show route advertising-protocol bgp 10.0.3.4 detail | match comm
     Communities: 65412:201
     Communities: 65412:201
     Communities: 65412:201
     Communities: 65412:201
     Communities: 65412:201
     Communities: 65412:201
     Communities: 65412:201
     Communities: 65412:201
     Communities: 65412:201
     Communities: 65412:201
     Communities: 65412:201
     Communities:

The output confirms that only the locally added communities are now attached to the routes. If P1 had added any communities, they are gone now! To complete the configuration of these import policy tasks, you must enable the remove-private option on r3 and r6 so that transit providers do not receive the private AS numbers added by r1 and r2. Be careful to apply remove-private only to the peer group containing the router's EBGP peers, because removing the private AS numbers associated with your subconfederation can break things in a really bad way:

[edit protocols bgp]
lab@r3# show
group 65000 {
    type internal;
    local-address 10.0.3.3;
    hold-time 180;
    export ibgp;
    neighbor 10.0.6.1;
}
group c-bgp {
    type external;
    multihop;
    local-address 10.0.3.3;
    export ibgp;
    neighbor 10.0.3.4 {
        peer-as 65001;
    }
    neighbor 10.0.3.5 {
        peer-as 65002;
    }
}
group t1-t2 {
    type external;
    damping;
    import damp;
    remove-private;
    peer-as 65222;
    multipath;
    neighbor 172.16.0.14;
    neighbor 172.16.0.18;
}

Because the removal of locally added private AS numbers and the addition of the confederation AS number occur after the processing of the show route advertising-protocol bgp <neighbor> command, you might opt to rely on faith when attempting to confirm that you have successfully removed the private AS numbers in the EBGP advertisements being sent to transit providers. In fact, the command output shown next could easily lead a candidate into believing that remove-private does not work, or worse, it might lead them down the path of unnecessary reconfigurations when there are better things for the exam candidate to be spending time on:

[edit protocols bgp]
lab@r3# run show route advertising-protocol bgp 172.16.0.14 120.120.2/24

inet.0: 117696 destinations, 235334 routes (117650 active, 0 holddown, 95 hidden)
+ = Active Route, - = Last Active, * = Both

120.120.2.0/24      Self                                 (65001) 64512 64512 1492 I

As an alternative to the 'just have faith' approach, you can monitor BGP advertisements in the reverse (receive) direction, after enabling keep all on r3 or r6 to leverage the JUNOS software behavior of advertising EBGP routes back to their source.
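A sketch of the steps involved on r3 follows; the soft-inbound form of the clear command prompts each transit peer to re-send its routes without tearing the session down:

[edit protocols bgp]
lab@r3# set keep all

[edit protocols bgp]
lab@r3# commit
commit complete

[edit protocols bgp]
lab@r3# run clear bgp neighbor 172.16.0.14 soft-inbound

[edit protocols bgp]
lab@r3# run clear bgp neighbor 172.16.0.18 soft-inbound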

After enabling keep all at the global level of r3's BGP stanza, we verify that peer router P1's routes, as echoed by T1 back to r3, have the expected AS path. You may need to soft-clear the EBGP sessions to the transit providers so they will re-advertise routes that previously failed incoming sanity checks:

[edit protocols bgp]
lab@r3# run show route receive-protocol bgp 172.16.0.14 120.120.2/24 hidden

inet.0: 117687 destinations, 470620 routes (117673 active, 0 holddown, 235331 hidden)
+ = Active Route, - = Last Active, * = Both

120.120.2.0/24      172.16.0.13                          65222 65412 1492 I

Be sure to remove the keep all setting when you are satisfied that all is working as required, and do not forget to add remove-private to the EBGP peer group containing T2 on r6.

Transit-Attached Routers r3 and r6

To complete this task, you need to apply a Martian filtering and community tagging policy to r3 and r6. Because r3 is dual-homed to EBGP peers in the same remote AS, you will have trouble adding the required peer-based community tags using AS path regex matching, because the transit routes carry the same AS path regardless of which peer sent them. A sample policy for r3 is shown next with highlights calling out its use of neighbor-based matching for its community tagging operations:

[edit policy-options]
lab@r3# show policy-statement transit-filter-in
term rfc1918 {
    from {
        route-filter 10.0.0.0/8 orlonger reject;
        route-filter 192.168.0.0/16 orlonger reject;
        route-filter 172.16.0.0/12 orlonger reject;
    }
}
term kill-27-or-longer {
    from {
        route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
    }
}
term tag-t1 {
    from neighbor 172.16.0.14;
    then {
        community add trans-1;
    }
}
term tag-t2 {
    from neighbor 172.16.0.18;
    then {
        community add trans-2;
    }
}

A similar policy should be added to r6, although r6 can make use of an AS path regex if you desire. The same set of community definitions shown for the other routers should also be added to r3 and r6 before proceeding to the next section. Once again, do not forget to apply your transit-filter-in policy as import to the EBGP peer group on both r3 and r6, as sketched next.
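On r6, for example, the application is a one-line change that appends the new policy to the group's existing import list (a sketch; the resulting chain matches the import statement shown later in the MED discussion):

[edit]
lab@r6# set protocols bgp group t2 import transit-filter-in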

CONFIRM IMPORT POLICY

You should display the active and hidden routes being received from each EBGP peer to confirm the effects of your Martian filters, community tagging, community removal, and local preference settings before proceeding to the next section.

EBGP Export Policy and Hidden Route Repair

This section addresses your remaining EBGP and IBGP export policy-related tasks. You begin with the following criterion, which is somewhat all-encompassing:

  • You cannot have any black holes or suboptimal routing.

The lack of next hop self policies, coupled with the restrictions that surround your ability to advertise the majority of the 172.16/12 networks used to support your EBGP peerings, has resulted in a number of hidden routes and the corresponding potential for inefficient routing. Recall that the specifics of your setup currently have r3 running a passive IS-IS instance on the fe-0/0/2 interface, which allows next-hop resolution for transit routes within your AS. The suboptimal routing situation can be observed on r7, which has only installed transit routes learned from the r3-T1 peering due to its inability to resolve the BGP next hop associated with the advertisements coming from r6:

[edit]
lab@r7# run show route 130.130/16 hidden detail

inet.0: 117617 destinations, 235039 routes (117567 active, 98 holddown, 65789 hidden)
130.130.0.0/16 (1 entry, 0 announced)
         BGP    Preference: 170/-101
                Next hop type: Unusable
                State: <Hidden Int Ext>
                Local AS: 65002 Peer AS: 65002
                Age: 1:37:14    Metric: 0
                Task: BGP_65002.10.0.9.6+179
                AS path: 65222 I
                Localpref: 100
                Router ID: 10.0.9.6

Having r7 forward all transit traffic through r5 and r3, when it could have taken a single hop through r6, is not optimal:

[edit]
lab@r7# run traceroute 130.130.0.1
traceroute to 130.130.0.1 (130.130.0.1), 30 hops max, 40 byte packets
 1  10.0.8.9 (10.0.8.9)  0.400 ms  0.329 ms  0.254 ms
 2  10.0.2.2 (10.0.2.2)  0.791 ms  0.592 ms  0.918 ms
 3  * * *
^C

In this case, the traceroute fails because the 10/8 aggregate for your AS has not yet been exported to EBGP peers. To fix these hidden routes, you must either adjust your existing IBGP export policy or write a new one that will selectively overwrite the BGP next hop on routes learned from EBGP peers. As mentioned in the chapter body, care should be taken to ensure you do not alter the next hop on routes learned from IBGP because this behavior could lead to suboptimal routing for internal destinations. The highlighted policy changes in r3's existing ibgp policy leverage your community tags to provide the desired EBGP next-hop rewrite behavior:

[edit policy-options policy-statement ibgp]
lab@r3# show
term 1 {
    from {
        protocol static;
        route-filter 192.168.30.0/24 exact;
    }
    then accept;
}
term 2 {
    from community [ trans-1 trans-2 ];
    then {
        next-hop self;
    }
}

The logical OR function of term 2's community-matching criteria will result in r3 setting itself as the next hop for routes tagged with either the 65412:101 or 65412:102 communities. An alternative approach would be to define a new, regex-based community that matches on the occurrence of either transit provider community tag, which allows you to list a single named community in your policy, as shown next:

community trans-1-2-a members 65412:10.;
community trans-1-2-b members 65412:10[1-2];
. . .
term 2 {
    from community trans-1-2-a;
    then {
        next-hop self;
    }
}

You should define similar next hop self-related IBGP export policies for all routers with EBGP peers, with the exception of r1 and r2 (their EBGP peering subnet is already being redistributed into IS-IS), before you proceed to the next section. Be sure to match on both customer site communities at r4 and r7 to accommodate routes coupled through the C1-C2 EBGP peering session. After committing changes on all affected routers, you retest the path from r7 to the 130.130.0.1 prefix to confirm that it can now forward optimally through r6:

[edit]
lab@r7# run traceroute 130.130.0.1
traceroute to 130.130.0.1 (130.130.0.1), 30 hops max, 40 byte packets
 1  10.0.8.1 (10.0.8.1)  0.300 ms  0.209 ms  0.123 ms
 2  *
^C

After you have successfully exported your 10/8 aggregate, the traceroute should complete normally. You should verify that no routers have hidden routes caused by unreachable next hops before proceeding. Looking at the state of a router like r5 provides a quick indication of the overall success of your hidden route repair efforts:

[edit]
lab@r5# run show route

inet.0: 112736 destinations, 337716 routes (112736 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

3.0.0.0/8          *[BGP/170] 02:06:16, localpref 100, from 10.0.3.3
                      AS path: (65000) 65222 10458 3944 2914 7018 80 I
                    > to 10.0.2.2 via at-0/2/1.35
. . .

We now examine solutions for the following EBGP export policy requirements:

  • Originate three NLRI advertisements to EBGP peers reflecting your 10/8 space, the OSPF router's routes, and the OSPF subnets, without altering the routing-options stanza on r3, r4, r6, and r7.

  • You must not use generated routes, but a single static route is permitted on both r1 and r2. Individual interface or link failures cannot disrupt the connectivity to P1.

r3, r4, r6, and r7

The requirement that you advertise aggregate routes, without modifying the routing-options stanza on r3, r4, r6, and r7, means that you will have to define a 10/8 aggregate on r5, and then adjust its IBGP export policy to advertise the new 10/8 aggregate along with its existing 192.168.0/22 and 172.16.40/29 aggregates to all other routers through IBGP. The default EBGP export policy will function to have all three aggregates re-advertised to transit and customer sites, assuming that the IBGP advertisements are active and that you have not negated the default EBGP policy, as per the examples provided in this chapter. The following ibgp policy change and aggregate route definition on r5 creates the necessary aggregate route advertisement behavior:

[edit]
lab@r5# show policy-options policy-statement ibgp
term 1 {
    from {
        protocol static;
        route-filter 192.168.50.0/24 exact;
    }
    then accept;
}
term 2 {
    from {
        protocol aggregate;
        route-filter 10.0.0.0/8 exact;
        route-filter 192.168.0.0/22 exact;
        route-filter 172.16.40.0/29 exact;
    }
    then accept;
}

[edit]
lab@r5# show routing-options aggregate
route 10.0.2.0/23;
route 10.0.8.0/21;
route 192.168.0.0/22;
route 172.16.40.0/29;
route 10.0.0.0/8;

The results should be confirmed for all transit and customer peering points before you move on to dealing with r1 and r2. The following output confirms that r4 is not advertising the 192.168.0/22 and 172.16.40/29 routes because of a protocol preference problem caused by r5's redistribution of the same routes into the level 2 IS-IS backbone:

[edit]
lab@r4# run show route advertising-protocol bgp 200.200.0.1 192.168.0/22

[edit]
lab@r4# run show route advertising-protocol bgp 200.200.0.1 172.16.40/29

[edit]
lab@r4# run show route 192.168.0/22

inet.0: 112467 destinations, 224853 routes (112465 active, 0 holddown, 2 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.0.0/22     *[IS-IS/165] 03:25:27, metric 11, tag 2
                    > to 10.0.2.9 via as0.0
                    [BGP/170] 01:02:11, localpref 100, from 10.0.3.5
                      AS path: (65002) I
                    > to 10.0.2.9 via as0.0

The addition of advertise-inactive to r4's BGP stanza allows the router to advertise the best BGP routes that are inactive due to protocol preference, which resolves this problem nicely. Applying the option at the EBGP peer group level will prevent the advertisement of the inactive summary routes to r2, so be sure to apply the advertise-inactive option at the global level (or multiple times within each peer group):

[edit]
lab@r4# set protocols bgp advertise-inactive

[edit]
lab@r4# commit
commit complete

[edit]
lab@r4# run show route advertising-protocol bgp 200.200.0.1 172.16.40/29

inet.0: 112491 destinations, 224898 routes (112486 active, 3 holddown, 2 hidden)
+ = Active Route, - = Last Active, * = Both

172.16.40.0/29      Self                                 (65002) I

Based on these results, you should make a point of adding the advertise-inactive option to r3 also. This option is not needed on r6 and r7 because the 192.168.0/22 and 172.16.40/29 aggregate routes are not leaked into their level 1 IS-IS area by r5, which causes r5's IBGP advertisement of the 192.168.0/22 and 172.16.40/29 route to be active. The correct behavior for summary route advertisements to EBGP peers is observed at r6:

[edit]
lab@r6# run show route advertising-protocol bgp 172.16.0.22 10/8

inet.0: 117883 destinations, 228894 routes (113047 active, 0 holddown, 4843 hidden)
+ = Active Route, - = Last Active, * = Both

10.0.0.0/8          Self                                 I

[edit]
lab@r6# run show route advertising-protocol bgp 172.16.0.22 172.16.40/29

inet.0: 117883 destinations, 228895 routes (113045 active, 0 holddown, 4846 hidden)
+ = Active Route, - = Last Active, * = Both

172.16.40.0/29      Self                                 I

[edit]
lab@r6# run show route advertising-protocol bgp 172.16.0.22 192.168.0/22

inet.0: 117883 destinations, 228895 routes (113045 active, 0 holddown, 4846 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.0.0/22      Self                                 I

r1 and r2

Both r1 and r2 should have active 192.168.0/22 and 172.16.40/29 BGP routes, so the default policy should already result in their advertisements to the P1 router:

[edit]
lab@r2# run show route advertising-protocol bgp 10.0.5.254 192.168.0/22

inet.0: 110768 destinations, 110769 routes (110764 active, 0 holddown, 5 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.0.0/22      Self                                 (65000 65002) I

[edit]
lab@r2# run show route advertising-protocol bgp 10.0.5.254 172.16.40/29

inet.0: 110768 destinations, 110769 routes (110764 active, 0 holddown, 5 hidden)
+ = Active Route, - = Last Active, * = Both

172.16.40.0/29      Self                                 (65000 65002) I

The problem with these routers involves getting the 10/8 aggregate sent. Both routers have hidden this route to prevent recursion loops, as discussed in the 'Verify Peer Site Export Policy' section earlier in this chapter. The case study's criteria indicate that you cannot use generated routes, which was the approach chosen in the 'Verify Peer Site Export Policy' section, but do permit a single static route on each of these routers as long as the failure of a single interface or link does not isolate the P1 router. Defining a 10/8 static route that points to discard or reject would be no better than an aggregate route, in that the presence of such a route would black-hole destinations outside of the level 1 IS-IS area.
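To make the black-hole risk concrete, the rejected approach would have amounted to something like the following sketch, under which any 10/8 destination outside the level 1 area would be silently discarded by r1:

[edit routing-options static]
lab@r1# set route 10/8 discard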

Based on the requirements, it would seem that your best bet will be to define a static route that makes use of qualified next hops so that the failure of the primary forwarding path will cause traffic to follow one or more qualified next hops that are attached to the static route. The commands needed to define the static route with a qualified next hop on r1 are shown next:

[edit routing-options static]
lab@r1# set route 10/8 next-hop 10.0.4.13

[edit routing-options static]
lab@r1# set route 10/8 qualified-next-hop 10.0.4.6 preference 10

The results from a similar configuration on r2 are displayed:

[edit routing-options static]
lab@r2# run show route 10/8

inet.0: 110644 destinations, 110647 routes (110641 active, 0 holddown, 5 hidden)
+ = Active Route, - = Last Active, * = Both

10.0.0.0/8         *[Static/5] 00:00:04
                    > to 10.0.4.9 via fe-0/0/1.0
                    [Static/10] 00:00:04
                    > to 10.0.4.1 via fe-0/0/2.0

With an active 10/8 prefix that provides the required redundancy, all that is required to finish this section is the creation (and application) of a simple static route redistribution policy on r1 and r2. The policy example, and its application as export shown here, do the job:

[edit]
lab@r1# show policy-options policy-statement p1-export
term 1 {
    from {
        protocol static;
        route-filter 10.0.0.0/8 exact;
    }
    then accept;
}

[edit]
lab@r1# show protocols bgp group p1
type external;
import peer-filter-in;
export p1-export;
neighbor 10.0.5.254 {
    peer-as 1492;
}

After committing the policy definition and application as export to the p1 peer group, the 10/8 advertisement is confirmed:

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 10/8

inet.0: 110627 destinations, 110641 routes (110624 active, 0 holddown, 5 hidden)
+ = Active Route, - = Last Active, * = Both

10.0.0.0/8          Self                                 I

With all routers advertising the three summaries for the prefixes in your AS, you should be able to conduct traceroute and ping testing to the loopback addresses of all external peers. A few examples of proper forwarding behavior, as taken from r5, are shown next. This router is chosen because of its lack of EBGP peering sessions.

lab@r5> traceroute 120.120.0.1
traceroute to 120.120.0.1 (120.120.0.1), 30 hops max, 40 byte packets
 1  10.0.2.10 (10.0.2.10)  0.734 ms  0.557 ms  0.502 ms
 2  10.0.4.10 (10.0.4.10)  0.436 ms  0.432 ms  0.415 ms
 3  120.120.0.1 (120.120.0.1)  0.613 ms  0.927 ms  0.612 ms

lab@r5> traceroute 130.130.0.1
traceroute to 130.130.0.1 (130.130.0.1), 30 hops max, 40 byte packets
 1  10.0.2.2 (10.0.2.2)  1.194 ms  0.843 ms  1.047 ms
 2  130.130.0.1 (130.130.0.1)  0.994 ms  1.021 ms  1.077 ms

lab@r5> traceroute 200.200.0.1
traceroute to 200.200.0.1 (200.200.0.1), 30 hops max, 40 byte packets
 1  10.0.2.10 (10.0.2.10)  0.597 ms  0.552 ms  0.513 ms
 2  200.200.0.1 (200.200.0.1)  0.567 ms  0.471 ms  0.459 ms

lab@r5> traceroute 201.201.0.1
traceroute to 201.201.0.1 (201.201.0.1), 30 hops max, 40 byte packets
 1  10.0.8.10 (10.0.8.10)  0.534 ms  0.438 ms  0.400 ms
 2  201.201.0.1 (201.201.0.1)  0.507 ms  0.487 ms  0.482 ms

start sidebar
More Than One Way to Skin a Cat

The 'aggregate route in a stub/level 1 area' problem was solved twice in this chapter: once with a generated route, and again in this section with a qualified next hop static route. Other creative solutions to this problem include the following:

  • Although definitely unorthodox, establishing multi-hop EBGP sessions from r3 and r4 to the P1 router will allow you to advertise the necessary summary routes. Whether this will work depends on the P1 router being preconfigured with the statements needed to permit these peerings, however.

  • Converting a totally stubby area into a stub area by removing the no-summaries option, or, in the case of IS-IS level 1 areas, applying an appropriate L2-to-L1 route leaking policy, results in the presence of more-specific routes that will prevent a local aggregate from creating a black hole.

  • Yet another solution involves the application of an IBGP export policy on backbone area routers to effect the redistribution of backbone area IGP routes to the routers in your IS-IS level 1 or OSPF totally stubby area.

  • As mentioned previously in this chapter, you could also adjust the IBGP export policy on the stub area's ABRs so that the BGP next hop associated with the aggregate route is set to a prefix that is internal to the stub area instead of the default loopback interface-based router ID.

You will need to read the specified criteria carefully to determine which, if any, of the options demonstrated in this chapter are permitted during a given JNCIP lab exercise. Always ask the proctor for clarification when there is any doubt as to the restrictions imposed on a given lab scenario.

end sidebar

With the corrected aggregate route advertisement behavior confirmed, the following case study criteria will be analyzed.

  • Send peer EBGP routes to all sites. Do not send transit provider EBGP routes to peer P1.

  • Customers receive all EBGP routes, and all sites receive customer EBGP routes.

Because the default BGP policy already advertises all active BGP routes, most routers only require an EBGP export policy that blocks the 192.168.x.0/24 routes (being redistributed from static into IBGP by each router) to satisfy their route filtering requirements. This policy will work for both customer-attached and transit-attached routers:

[edit]
lab@r4# show policy-options policy-statement no-192-24s
term 1 {
    from {
        route-filter 192.168.0.0/16 prefix-length-range /24-/32 reject;
    }
}

[edit]
lab@r4# show protocols bgp group c1
type external;
multihop;
local-address 10.0.3.4;
import cust-filter-in;
family inet {
    unicast {
        prefix-limit {
            maximum 10;
        }
    }
}
export no-192-24s;
peer-as 65010;
neighbor 200.200.0.1 {
    authentication-key "$9$n2/i9tOMWx7VY"; # SECRET-DATA
}

When applied to EBGP peers as shown, proper operation is confirmed:

[edit]
lab@r4# run show route advertising-protocol bgp 200.200.0.1 192.168.70/24

[edit]
lab@r4# run show route advertising-protocol bgp 200.200.0.1 192.168.0/22

inet.0: 112821 destinations, 225592 routes (112819 active, 0 holddown, 2 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.0.0/22      Self                                 (65000 65002) I

For r1 and r2, you now adjust the existing p1-export policy so that it blocks the routes learned from your transit providers as well as the 192.168.x/24 prefixes. To filter the transit routes, you could write the policy to match on the unique communities assigned to each transit provider (see the sketch that follows), or you could opt for an AS path regex to leverage the fact that all transit routes have the transit network's AS in common.
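For comparison, the community-based version of the transit filtering term might look like the following sketch, which assumes that the trans-1 and trans-2 community definitions shown earlier have also been configured on r1 and r2:

term 2 {
    from community [ trans-1 trans-2 ];
    then reject;
}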

The AS path regex approach is demonstrated here because community-based route filtering was shown in the body of this chapter. The highlighted changes to the p1-export policy, combined with the correct AS path regex, will correctly filter the transit and 192.168.x/24 routes from the EBGP advertisements sent to P1:

[edit policy-options policy-statement p1-export]
lab@r1# show
term 1 {
    from {
        protocol static;
        route-filter 10.0.0.0/8 exact;
    }
    then accept;
}
term 2 {
    from as-path trans;
    then reject;
}
term 3 {
    from {
        route-filter 192.168.0.0/16 prefix-length-range /24-/32;
    }
    then reject;
}

[edit policy-options]
lab@r1# show as-path trans
".* 65222 .*";

Many exam candidates would think that all routes coming from transit providers should begin with 65222, and that an AS path regex like "65222 .*" would therefore work to filter these routes. While this might be true for a full mesh or pure route reflection topology, the use of a confederation means these routes may very well contain your network's subconfederation AS numbers when received by r1 and r2, as shown here:

[edit]
lab@r2# run show route aspath-regex ".* 65222 .*"

inet.0: 112978 destinations, 112981 routes (112975 active, 0 holddown, 5 hidden)
+ = Active Route, - = Last Active, * = Both

3.0.0.0/8          *[BGP/170] 08:33:08, localpref 100, from 10.0.3.4
                      AS path: (65000) 65222 10458 3944 2914 7018 80 I
                    > to 10.0.4.9 via fe-0/0/1.0
4.0.0.0/8          *[BGP/170] 08:33:08, localpref 100, from 10.0.3.4
                      AS path: (65000) 65222 10458 3944 2914 1 I
                    > to 10.0.4.9 via fe-0/0/1.0

Making assumptions can lead to the commission of 'simple' mistakes when in the JNCIP lab. Most successful JNCIP exam candidates have developed work habits that result in their always taking the time to verify the effects of their configurations and policies, because after all, everybody who has ever made an assumption has been proven wrong at some point! The following commands are used to confirm proper P1 export policy operation:

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 | match 65222

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 192.168.60/24

[edit]
lab@r1#

Good, no transit or 192.168.60/24 routes are present. Now to verify the advertisement of the customer routes:

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 200.200.2/24

inet.0: 112960 destinations, 112963 routes (112957 active, 0 holddown, 5 hidden)
+ = Active Route, - = Last Active, * = Both

200.200.2.0/24      Self                                 (65001) 65010 I

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 201.201.0/24

inet.0: 112960 destinations, 112963 routes (112957 active, 0 holddown, 5 hidden)
+ = Active Route, - = Last Active, * = Both

201.201.0.0/24      Self                                 (65001 65002 65413) 65020 I

Routes from both customer sites are being sent. Combined with previous confirmation of the required summary route advertisements, this behavior indicates that you have accomplished the goals of this section. To finish this case study, you have the following requirements to deal with:

  • r6 must advertise a MED to T2 based on its IGP metrics.

  • Ensure that transit providers use the r6 peering link when forwarding traffic to customer destinations without setting MED on r3.

The first item requires the configuration of a MED setting on r6 that will track the IGP cost for each prefix advertised. The highlighted change instructs r6 to set the MED to equal its current IGP cost to each destination:

[edit protocols bgp group t2]
lab@r6# show
type external;
metric-out igp;
damping;
import [ damp transit-filter-in ];
export no-192-24s;
remove-private;
neighbor 172.16.0.22 {
    peer-as 65222;
}

Displaying the MED setting in the routes advertised from r6 to T2 and comparing the value to the IGP metric for a given route confirms that this requirement has been met:

[edit]
lab@r6# run show route advertising-protocol bgp 172.16.0.22 10/8 detail

inet.0: 117914 destinations, 230419 routes (112632 active, 0 holddown, 5291 hidden)
10.0.0.0/8 (1 entry, 1 announced)
 BGP group t2 type External
     Nexthop: Self
     MED: 15
     AS path: I
     Aggregator: 65002 10.0.3.5
     Communities:

[edit]
lab@r6# run show route resolution 10/8
Table inet.3
Nodes 0
10.0.0.0/8
Originating RIB: inet.0
  Metric: 15  Node path count: 1
  Indirect nexthops: 1
        Protocol Nexthop: 10.0.3.5 Metric: 15
        Indirect nexthop: 84ed110 122
        Indirect path forwarding nexthops: 1
                Nexthop: 10.0.8.6 via fe-0/1/0.0

The last requirement is to make transit providers T1 and T2 prefer the T2-r6 peering point when forwarding to customer prefixes, without setting MED on r3. T2 currently prefers the r3 peering point due to its preferred MED setting. Recall that no MED equals 0, so having r6 advertise any non-zero MED makes it less attractive to T2. Even without MED, r3's lower RID would still cause it to be the active source of your customer's routes. The current customer prefix forwarding behavior at T2 is shown here:

lab@T2> show route 200.200.1/24 detail

inet.0: 117819 destinations, 342989 routes (117819 active, 0 holddown, 112543 hidden)
200.200.1.0/24 (3 entries, 1 announced)
        *BGP    Preference: 170/-101
                Source: 172.16.0.17
                Nexthop: 172.16.0.17 via fe-0/0/2.0, selected
                State: <Active Ext>
                Local AS: 65222 Peer AS: 65412
                Age: 8:04:18
                Task: BGP_65412.172.16.0.17+1026
                Announcement bits (3): 0-KRT 3-BGP.0.0.0.0+179 4-Resolve inet.0
                AS path: 65412 I
                Communities: 65412:301
                Localpref: 100
                Router ID: 10.0.3.3
         BGP    Preference: 170/-101
                Source: 10.0.1.65
                Nexthop: 10.0.1.65 via fe-0/0/0.0, selected
                Protocol Nexthop: 10.0.1.65 Indirect nexthop: 8426000 46
                State: <NotBest Int Ext>
                Inactive reason: Interior > Exterior > Exterior via Interior
                Local AS: 65222 Peer AS: 65222
                Age: 8:04:18    Metric2: 0
                Task: BGP_65222.10.0.1.65+179
                AS path: 65412 I
                Communities: 65412:301
                Localpref: 100
                Router ID: 130.130.0.1
         BGP    Preference: 170/-101
                Source: 172.16.0.21
                Nexthop: 172.16.0.21 via fe-0/0/1.0, selected
                State: <NotBest Ext>
                Inactive reason: Not Best in its group
                Local AS: 65222 Peer AS: 65412
                Age: 11:36      Metric: 15
                Task: BGP_65412.172.16.0.21+179
                AS path: 65412 I
                Communities: 65412:301
                Localpref: 100
                Router ID: 10.0.9.6

Because you cannot configure r3 to send a less attractive MED to T2, you need to find a tie-breaking condition that is evaluated before MED in the active route selection process that you can control from r3. If you are thinking "AS path prepending," then this author thinks you are thinking right!

The new prepend policy statement is designed to have r3 add two extra copies of your network's AS number to any customer routes it sends to T1 or T2:

[edit]
lab@r3# show policy-options policy-statement prepend
term 1 {
    from community [ cust-1 cust-2 ];
    then as-path-prepend "65412 65412";
}

Note that having r3 prepend the routes it sends to T2, but not to T1, will result in T2 choosing to forward through T1, which in turn will forward through r3 to reach customer prefixes due to the higher MED settings T2 sees in the routes coming from r6. A correct application of the prepend policy is shown next:

[edit]
lab@r3# show protocols bgp group t1-t2
type external;
damping;
import [ damp transit-filter-in ];
export [ no-192-24s prepend ];
remove-private;
peer-as 65222;
multipath;
neighbor 172.16.0.14;
neighbor 172.16.0.18;

The desired forwarding behavior is now observed on T2:

lab@T2> show route 200.200.1/24 detail

inet.0: 117738 destinations, 343163 routes (117738 active, 0 holddown, 112633 hidden)
200.200.1.0/24 (2 entries, 1 announced)
        *BGP    Preference: 170/-101
                Source: 172.16.0.21
                Nexthop: 172.16.0.21 via fe-0/0/1.0, selected
                State: <Active Ext>
                Local AS: 65222 Peer AS: 65412
                Age: 59:16      Metric: 15
                Task: BGP_65412.172.16.0.21+179
                Announcement bits (3): 0-KRT 3-BGP.0.0.0.0+179 4-Resolve inet.0
                AS path: 65412 I
                Communities: 65412:301
                Localpref: 100
                Router ID: 10.0.9.6
         BGP    Preference: 170/-101
                Source: 172.16.0.17
                Nexthop: 172.16.0.17 via fe-0/0/2.0, selected
                State: <Ext>
                Inactive reason: AS path
                Local AS: 65222 Peer AS: 65412
                Age: 12:05
                Task: BGP_65412.172.16.0.17+1035
                AS path: 65412 65412 65412 I
                Communities: 65412:301
                Localpref: 100
                Router ID: 10.0.3.3

Verifying that you have met the customer traffic weighting requirements is more difficult if you do not have access to the transit routers. You can easily observe the presence of prepended AS numbers in your advertisements to transit providers, but knowing how the remote routers react to them is another story.

[edit]
lab@r3# run show route advertising-protocol bgp 172.16.0.18 200.200.1/24

inet.0: 117757 destinations, 242451 routes (112934 active, 0 holddown, 15949 hidden)
+ = Active Route, - = Last Active, * = Both
200.200.1.0/24          Self          65412 65412 [65000] (65001) 65010 I

By enabling keep all on r3 (and performing a soft clear of its EBGP peers), you can display the routes that T1 is advertising to r3 and, from this information, deduce that it has not chosen the route with prepended AS numbers. The keep all option is needed to permit the storage of routes with AS path loops in the router's RIB-in: you expect T1 to re-advertise the 200.200.1/24 route back to r3, and whether that route was learned from r3 or from r6, your local AS number will be present in its path:

[edit]
lab@r3# run show route receive-protocol bgp 172.16.0.14 200.200.1/24 hidden

inet.0: 117762 destinations, 461223 routes (112905 active, 1 holddown, 235466 hidden)
+ = Active Route, - = Last Active, * = Both
200.200.1.0/24          172.16.0.14          65222 65412 I
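For reference, a minimal sketch of the keep all and soft-clear steps mentioned above follows; the group-level placement of keep all is an assumption on my part, as the statement can also be applied at the global [edit protocols bgp] level:

[edit]
lab@r3# set protocols bgp group t1-t2 keep all

[edit]
lab@r3# commit
commit complete

[edit]
lab@r3# run clear bgp neighbor 172.16.0.14 soft-inbound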

The presence of the transit provider's AS number and the single occurrence of the local AS number indicate that T1 has installed the customer route it received from T2; otherwise, you would see r3's prepended routes echoed back. Knowing that r3 prepends AS numbers in the routes it sends to both of its transit providers, coupled with the knowledge that the routes they re-advertise do not contain any prepended AS numbers, provides compelling evidence that both providers have installed the route learned from r6 as the active route. Deactivating r6's EBGP peering session provides definitive proof, because soon afterward both T1 and T2 begin re-advertising routes that carry the prepended AS numbers:

[edit]
lab@r6# deactivate protocols bgp group t2

[edit]
lab@r6# commit
commit complete

[edit]
lab@r3# run show route receive-protocol bgp 172.16.0.14 200.200.1/24 hidden detail

inet.0: 117898 destinations, 460474 routes (112395 active, 0 holddown, 235788 hidden)
200.200.1.0/24 (3 entries, 1 announced)
     Nexthop: 172.16.0.13
     AS path: 65222 65412 65412 65412 I (Looped: 65000)
     Communities: 65412:301
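Do not forget to restore r6's transit peering once you are satisfied with the results; a quick sketch:

[edit]
lab@r6# activate protocols bgp group t2

[edit]
lab@r6# commit
commit complete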

Final Confirmation

The complexity of this case study makes printing every required verification step and its corresponding operational mode output cumbersome for the reader, and expensive for the forest that would have to be sacrificed to print the extra pages. This section has provided key examples of how you can monitor the operation of your BGP configuration and related policies. Before considering a case study of this magnitude complete, validate your work by asking yourself the following questions; a few representative spot-check commands follow the list:

  • Are all your IBGP and EBGP sessions still established?

  • Are there any hidden routes due to unreachable next hops?

  • Are you exporting the correct routes to all peers?

  • Have you filtered routes from each peer according to the defined list of Martians?

  • Are the required MED, local preference, prepended AS numbers, and community tags present?

  • Do you still have full reachability to internal destinations over optimal paths?

  • Can all your routers trace the route to each EBGP peer's loopback address using optimal paths? Did you remember to verify connectivity from the OSPF router to your EBGP peers?

  • Is damping working?

  • Are IBGP and EBGP load balancing working?
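Several of these questions map directly to operational mode commands. The following is a minimal sketch of how you might spot-check them from r3; the community value shown is the cust-1 tag used in this case study, and you should substitute the neighbor addresses appropriate to each router being checked:

lab@r3> show bgp summary
lab@r3> show route hidden detail
lab@r3> show route community 65412:301
lab@r3> show route damping suppressed
lab@r3> show route advertising-protocol bgp 172.16.0.14 detail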

EBGP Case Study Configurations

The modified configuration stanzas needed to complete the EBGP case study, built on top of the Chapter 5 confederation topology, are shown in Listings 6.1 through 6.7 for all seven routers in the test bed.

Listing 6.1: r1 EBGP-Related Configuration

start example
[edit]
lab@r1# show protocols bgp
group 65000 {
    type internal;
    local-address 10.0.6.1;
    export ibgp;
    neighbor 10.0.3.3;
}
group p1 {
    type external;
    import peer-filter-in;
    export p1-export;
    neighbor 10.0.5.254 {
        peer-as 1492;
    }
}

[edit]
lab@r1# show routing-options
static {
    route 10.0.200.0/24 next-hop 10.0.1.102;
    route 192.168.10.0/24 reject;
    route 10.0.0.0/8 {
        next-hop 10.0.4.13;
        qualified-next-hop 10.0.4.6 {
            preference 10;
        }
    }
}
martians {
    192.0.2.0/24 orlonger;
}
autonomous-system 65000;
confederation 65412 members [ 65000 65001 65002 ];

[edit]
lab@r1# show policy-options
policy-statement direct {
    term 1 {
        from {
            protocol direct;
            route-filter 10.0.5.0/24 exact;
        }
        then {
            metric 101;
            accept;
        }
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.10.0/24 exact;
        }
        then accept;
    }
}
policy-statement peer-filter-in {
    term rfc1918 {
        from {
            route-filter 10.0.0.0/8 orlonger reject;
            route-filter 192.168.0.0/16 orlonger reject;
            route-filter 172.16.0.0/12 orlonger reject;
            route-filter 0.0.0.0/0 through 0.0.0.0/32 reject;
        }
    }
    term kill-27-or-longer {
        from {
            route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
        }
    }
    term no-comms {
        then {
            community delete all-comms;
        }
    }
    term tag-p1 {
        from as-path peer-1;
        then {
            community add peer-1;
            as-path-prepend "64512 64512";
        }
    }
}
policy-statement p1-export {
    term 1 {
        from {
            protocol static;
            route-filter 10.0.0.0/8 exact;
        }
        then accept;
    }
    term 2 {
        from as-path trans;
        then reject;
    }
    term 3 {
        from {
            route-filter 192.168.0.0/16 prefix-length-range /24-/32;
        }
        then reject;
    }
}
community all-comms members *:*;
community cust-1 members 65412:301;
community cust-2 members 65412:302;
community peer-1 members 65412:201;
community trans-1 members 65412:101;
community trans-2 members 65412:102;
as-path peer-1 ".* 1492";
as-path trans ".* 65222 .*";
end example
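One design point worth noting in Listing 6.1: the 10.0.0.0/8 static route carries both a primary next hop and a qualified-next-hop backup. The primary next hop inherits the static route's default preference of 5, while the qualified next hop carries preference 10, so forwarding falls back to the backup automatically if the primary next hop becomes unusable. You can confirm which next hop is active with something like:

lab@r1> show route 10.0.0.0/8 exact detail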

Listing 6.2: r2 EBGP-Related Configuration

start example
lab@r2# show protocols bgp
group 65001 {
    type internal;
    local-address 10.0.6.2;
    export ibgp;
    neighbor 10.0.3.4;
}
group p1 {
    type external;
    import peer-filter-in;
    export p1-export;
    neighbor 10.0.5.254 {
        peer-as 1492;
    }
}

[edit]
lab@r2# show routing-options
static {
    route 10.0.200.0/24 next-hop 10.0.1.102;
    route 192.168.20.0/24 reject;
    route 10.0.0.0/8 {
        next-hop 10.0.4.9;
        qualified-next-hop 10.0.4.1 {
            preference 10;
        }
    }
}
martians {
    192.0.2.0/24 orlonger;
}
autonomous-system 65001;
confederation 65412 members [ 65000 65001 65002 ];

[edit]
lab@r2# show policy-options
policy-statement direct {
    term 1 {
        from {
            protocol direct;
            route-filter 10.0.5.0/24 exact;
        }
        then {
            metric 101;
            accept;
        }
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.20.0/24 exact;
        }
        then accept;
    }
}
policy-statement peer-filter-in {
    term rfc1918 {
        from {
            route-filter 10.0.0.0/8 orlonger reject;
            route-filter 192.168.0.0/16 orlonger reject;
            route-filter 172.16.0.0/12 orlonger reject;
            route-filter 0.0.0.0/0 through 0.0.0.0/32 reject;
        }
    }
    term kill-27-or-longer {
        from {
            route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
        }
    }
    term no-comms {
        then {
            community delete all-comms;
        }
    }
    term tag-p1 {
        from as-path peer-1;
        then {
            community add peer-1;
            as-path-prepend "64512 64512";
        }
    }
    term prefer-2 {
        from community peer-1;
        then {
            local-preference 101;
        }
    }
}
policy-statement p1-export {
    term 1 {
        from {
            protocol static;
            route-filter 10.0.0.0/8 exact;
        }
        then accept;
    }
    term 2 {
        from as-path trans;
        then reject;
    }
    term 3 {
        from {
            route-filter 192.168.0.0/16 prefix-length-range /24-/32;
        }
        then reject;
    }
}
community all-comms members *:*;
community cust-1 members 65412:301;
community cust-2 members 65412:302;
community peer-1 members 65412:201;
community trans-1 members 65412:101;
community trans-2 members 65412:102;
as-path peer-1 ".* 1492";
as-path trans ".* 65222 .*";
end example

Listing 6.3: r3 EBGP-Related Configuration

start example
[edit]
lab@r3# show protocols bgp
advertise-inactive;
group 65000 {
    type internal;
    local-address 10.0.3.3;
    hold-time 180;
    export ibgp;
    neighbor 10.0.6.1;
}
group c-bgp {
    type external;
    multihop;
    local-address 10.0.3.3;
    export ibgp;
    neighbor 10.0.3.4 {
        peer-as 65001;
    }
    neighbor 10.0.3.5 {
        peer-as 65002;
    }
}
group t1-t2 {
    type external;
    damping;
    import [ damp transit-filter-in ];
    export [ no-192-24s prepend ];
    remove-private;
    peer-as 65222;
    multipath;
    neighbor 172.16.0.14;
    neighbor 172.16.0.18;
}

[edit]
lab@r3# show interfaces fe-0/0/3
unit 0 {
    family inet {
        address 172.16.0.17/30;
    }
}

[edit]
lab@r3# show routing-options
static {
    route 10.0.200.0/24 next-hop 10.0.1.102;
    route 192.168.30.0/24 reject;
}
martians {
    192.0.2.0/24 orlonger;
}
aggregate {
    route 10.0.4.0/22;
}
autonomous-system 65000;
confederation 65412 members [ 65000 65001 65002 ];

[edit]
lab@r3# show policy-options
policy-statement summ {
    term 1 {
        from {
            route-filter 10.0.5.0/24 exact;
        }
        to level 2;
        then accept;
    }
    term 2 {
        from {
            protocol aggregate;
            route-filter 10.0.4.0/22 exact;
        }
        to level 2;
        then accept;
    }
    term 3 {
        from {
            route-filter 10.0.4.0/22 longer;
        }
        to level 2;
        then reject;
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.30.0/24 exact;
        }
        then accept;
    }
    term 2 {
        from community [ trans-1 trans-2 ];
        then {
            next-hop self;
        }
    }
}
policy-statement damp {
    term 1 {
        from {
            route-filter 200.0.0.0/16 orlonger damping none;
            route-filter 0.0.0.0/0 prefix-length-range /0-/8 damping none;
            route-filter 0.0.0.0/0 prefix-length-range /9-/16 damping low;
            route-filter 0.0.0.0/0 prefix-length-range /17-/32 damping high;
        }
    }
}
policy-statement transit-filter-in {
    term rfc1918 {
        from {
            route-filter 10.0.0.0/8 orlonger reject;
            route-filter 192.168.0.0/16 orlonger reject;
            route-filter 172.16.0.0/12 orlonger reject;
        }
    }
    term kill-27-or-longer {
        from {
            route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
        }
    }
    term tag-t1 {
        from neighbor 172.16.0.14;
        then {
            community add trans-1;
        }
    }
    term tag-t2 {
        from neighbor 172.16.0.18;
        then {
            community add trans-2;
        }
    }
}
policy-statement no-192-24s {
    term 1 {
        from {
            route-filter 192.168.0.0/16 prefix-length-range /24-/32 reject;
        }
    }
}
policy-statement prepend {
    term 1 {
        from community [ cust-1 cust-2 ];
        then as-path-prepend "65412 65412";
    }
}
community cust-1 members 65412:301;
community cust-2 members 65412:302;
community peer-1 members 65412:201;
community trans-1 members 65412:101;
community trans-2 members 65412:102;
damping none {
    disable;
}
damping high {
    half-life 25;
    reuse 1500;
}
damping low {
    half-life 20;
    reuse 1000;
}
end example
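Because r3's damp policy references three named damping profiles, it is worth confirming that the profiles were parsed as intended and that flapping routes are actually being suppressed. A couple of commands that can help, sketched here for reference:

lab@r3> show policy damping
lab@r3> show route damping suppressed detail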

Listing 6.4: r4 EBGP-Related Configuration

start example
[edit]
lab@r4# show protocols bgp
advertise-inactive;
group 65001 {
    type internal;
    local-address 10.0.3.4;
    export ibgp;
    neighbor 10.0.6.2;
}
group c-bgp {
    type external;
    multihop;
    local-address 10.0.3.4;
    export ibgp;
    neighbor 10.0.3.3 {
        peer-as 65000;
    }
    neighbor 10.0.3.5 {
        peer-as 65002;
    }
}
group c1 {
    type external;
    multihop;
    local-address 10.0.3.4;
    import cust-filter-in;
    family inet {
        unicast {
            prefix-limit {
                maximum 10;
            }
        }
    }
    export no-192-24s;
    peer-as 65010;
    neighbor 200.200.0.1 {
        authentication-key "$9$n2/i9tOMWx7VY"; # SECRET-DATA
    }
}

[edit]
lab@r4# show interfaces fe-0/0/0
unit 0 {
    family inet {
        address 172.16.0.5/30;
    }
}

[edit]
lab@r4# show interfaces fe-0/0/2
unit 0 {
    family inet {
        address 172.16.0.9/30;
    }
}

[edit]
lab@r4# show routing-options
static {
    route 10.0.200.0/24 next-hop 10.0.1.102;
    route 192.168.40.0/24 reject;
    route 200.200.0.1/32 next-hop [ 172.16.0.6 172.16.0.10 ];
}
martians {
    192.0.2.0/24 orlonger;
}
aggregate {
    route 10.0.4.0/22;
}
autonomous-system 65001;
confederation 65412 members [ 65000 65001 65002 ];

[edit]
lab@r4# show policy-options
policy-statement summ {
    term 1 {
        from {
            route-filter 10.0.5.0/24 exact;
        }
        to level 2;
        then accept;
    }
    term 2 {
        from {
            protocol aggregate;
            route-filter 10.0.4.0/22 exact;
        }
        to level 2;
        then accept;
    }
    term 3 {
        from {
            route-filter 10.0.4.0/22 longer;
        }
        to level 2;
        then reject;
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.40.0/24 exact;
        }
        then accept;
    }
    term 2 {
        from community [ cust-1 cust-2 ];
        then {
            next-hop self;
        }
    }
}
policy-statement cust-filter-in {
    term rfc1918 {
        from {
            route-filter 10.0.0.0/8 orlonger reject;
            route-filter 192.168.0.0/16 orlonger reject;
            route-filter 172.16.0.0/12 orlonger reject;
            route-filter 0.0.0.0/0 through 0.0.0.0/32 reject;
        }
    }
    term kill-27-or-longer {
        from {
            route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
        }
    }
    term tag-c1 {
        from as-path cust-1;
        then {
            community add cust-1;
        }
    }
    term tag-c2 {
        from as-path cust-2;
        then {
            community add cust-2;
        }
    }
    term prefer-cust {
        from as-path [ cust-1 cust-2 ];
        then {
            local-preference 101;
            next policy;
        }
    }
    term kill-rest {
        then reject;
    }
}
policy-statement no-192-24s {
    term 1 {
        from {
            route-filter 192.168.0.0/16 prefix-length-range /24-/32 reject;
        }
    }
}
community cust-1 members 65412:301;
community cust-2 members 65412:302;
community peer-1 members 65412:201;
community trans-1 members 65412:101;
community trans-2 members 65412:102;
as-path cust-1 ".* 65010";
as-path cust-2 ".* 65020";
end example
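A note on the prefix-limit in r4's c1 group: with only the maximum statement present, exceeding the limit causes syslog messages to be generated while the session stays up. If you instead wanted the session reset when the limit is crossed, the addition would look like the following sketch; the teardown statement is shown purely for contrast and is deliberately not used in this configuration:

family inet {
    unicast {
        prefix-limit {
            maximum 10;
            teardown;   # resets the session instead of only logging
        }
    }
}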

Listing 6.5: r5 EBGP-Related Configuration

start example
[edit]
lab@r5# show routing-options
static {
    route 10.0.200.0/24 next-hop 10.0.1.102;
    route 192.168.50.0/24 reject;
}
martians {
    192.0.2.0/24 orlonger;
}
aggregate {
    route 10.0.2.0/23;
    route 10.0.8.0/21;
    route 192.168.0.0/22;
    route 172.16.40.0/29;
    route 10.0.0.0/8;
}
autonomous-system 65002;
confederation 65412 members [ 65000 65001 65002 ];

[edit]
lab@r5# show policy-options
policy-statement summ {
    term 1 {
        from {
            protocol aggregate;
            route-filter 10.0.2.0/23 exact;
        }
        to level 1;
        then accept;
    }
    term 2 {
        from {
            route-filter 10.0.8.0/21 longer;
        }
        to level 2;
        then reject;
    }
    term 3 {
        from {
            protocol aggregate;
            route-filter 10.0.8.0/21 exact;
            route-filter 192.168.0.0/22 exact;
            route-filter 172.16.40.0/29 exact;
        }
        to level 2;
        then accept;
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.50.0/24 exact;
        }
        then accept;
    }
    term 2 {
        from {
            protocol aggregate;
            route-filter 10.0.0.0/8 exact;
            route-filter 192.168.0.0/22 exact;
            route-filter 172.16.40.0/29 exact;
        }
        then accept;
    }
}
community cust-1 members 65412:301;
community cust-2 members 65412:302;
community peer-1 members 65412:201;
community trans-1 members 65412:101;
community trans-2 members 65412:102;
end example

Listing 6.6: r6 EBGP-Related Configuration

start example
[edit]
lab@r6# show protocols bgp
group 65002 {
    type internal;
    local-address 10.0.9.6;
    export ibgp;
    neighbor 10.0.9.7;
    neighbor 10.0.3.5;
}
group t2 {
    type external;
    metric-out igp;
    damping;
    import [ damp transit-filter-in ];
    export no-192-24s;
    remove-private;
    neighbor 172.16.0.22 {
        peer-as 65222;
    }
}

[edit]
lab@r6# show interfaces fe-0/1/2
unit 0 {
    family inet {
        address 172.16.0.21/30;
    }
}

[edit]
lab@r6# show routing-options
static {
    route 10.0.200.0/24 next-hop 10.0.1.102;
    route 192.168.60.0/24 reject;
}
martians {
    192.0.2.0/24 orlonger;
}
router-id 10.0.9.6;
autonomous-system 65002;
confederation 65412 members [ 65000 65001 65002 ];

[edit]
lab@r6# show policy-options
policy-statement isis-ospf {
    term 1 {
        from {
            route-filter 0.0.0.0/0 exact;
        }
        then accept;
    }
}
policy-statement ospf-isis {
    term 1 {
        from {
            route-filter 192.168.0.0/22 longer;
            route-filter 172.16.40.0/29 longer;
        }
        then accept;
    }
    term 2 {
        from {
            route-filter 0.0.0.0/0 exact;
        }
        then reject;
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.60.0/24 exact;
        }
        then accept;
    }
    term 2 {
        from community trans-2;
        then {
            next-hop self;
        }
    }
}
policy-statement damp {
    term 1 {
        from {
            route-filter 200.0.0.0/16 orlonger damping none;
            route-filter 0.0.0.0/0 prefix-length-range /0-/8 damping none;
            route-filter 0.0.0.0/0 prefix-length-range /9-/16 damping low;
            route-filter 0.0.0.0/0 prefix-length-range /17-/32 damping high;
        }
    }
}
policy-statement transit-filter-in {
    term rfc1918 {
        from {
            route-filter 10.0.0.0/8 orlonger reject;
            route-filter 192.168.0.0/16 orlonger reject;
            route-filter 172.16.0.0/12 orlonger reject;
        }
    }
    term kill-27-or-longer {
        from {
            route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
        }
    }
    term tag-t2 {
        from neighbor 172.16.0.22;
        then {
            community add trans-2;
        }
    }
}
policy-statement no-192-24s {
    term 1 {
        from {
            route-filter 192.168.0.0/16 prefix-length-range /24-/32 reject;
        }
    }
}
community cust-1 members 65412:301;
community cust-2 members 65412:302;
community peer-1 members 65412:201;
community trans-1 members 65412:101;
community trans-2 members 65412:102;
damping none {
    disable;
}
damping high {
    half-life 25;
    reuse 1500;
}
damping low {
    half-life 20;
    reuse 1000;
}
end example
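The metric-out igp statement in r6's t2 group is what produced the Metric: 15 entry seen in T2's output earlier in this section: it sets the MED of each route advertised to T2 to r6's IGP cost to that route's next hop. You can check the MED values actually being sent with something like:

lab@r6> show route advertising-protocol bgp 172.16.0.22 detail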

Listing 6.7: r7 EBGP-Related Configuration

start example
[edit]
lab@r7# show protocols bgp
group 65002 {
    type internal;
    local-address 10.0.9.7;
    export ibgp;
    neighbor 10.0.9.6;
    neighbor 10.0.3.5;
}
group c2 {
    type external;
    import cust-filter-in;
    export no-192-24s;
    local-as 65413;
    neighbor 172.16.0.26 {
        peer-as 65020;
    }
}

[edit]
lab@r7# show interfaces fe-0/3/2
unit 0 {
    family inet {
        address 172.16.0.25/30;
    }
}

[edit]
lab@r7# show routing-options
static {
    route 10.0.200.0/24 next-hop 10.0.1.102;
    route 192.168.70.0/24 reject;
}
martians {
    192.0.2.0/24 orlonger;
}
router-id 10.0.9.7;
autonomous-system 65002;
confederation 65412 members [ 65000 65001 65002 ];

[edit]
lab@r7# show policy-options
policy-statement isis-ospf {
    term 1 {
        from {
            route-filter 0.0.0.0/0 exact accept;
        }
    }
}
policy-statement ospf-isis {
    term 1 {
        from {
            route-filter 192.168.0.0/22 longer;
            route-filter 172.16.40.0/29 longer;
        }
        then accept;
    }
    term 2 {
        from {
            route-filter 0.0.0.0/0 exact;
        }
        then reject;
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.70.0/24 exact;
        }
        then accept;
    }
    term 2 {
        from community [ cust-1 cust-2 ];
        then {
            next-hop self;
        }
    }
}
policy-statement cust-filter-in {
    term rfc1918 {
        from {
            route-filter 10.0.0.0/8 orlonger reject;
            route-filter 192.168.0.0/16 orlonger reject;
            route-filter 172.16.0.0/12 orlonger reject;
            route-filter 0.0.0.0/0 through 0.0.0.0/32 reject;
        }
    }
    term kill-27-or-longer {
        from {
            route-filter 0.0.0.0/0 prefix-length-range /27-/32 reject;
        }
    }
    term tag-c1 {
        from as-path cust-1;
        then {
            community add cust-1;
        }
    }
    term tag-c2 {
        from as-path cust-2;
        then {
            community add cust-2;
        }
    }
    term prefer-cust {
        from as-path [ cust-1 cust-2 ];
        then {
            local-preference 101;
            next policy;
        }
    }
    term kill-rest {
        then reject;
    }
}
policy-statement no-192-24s {
    term 1 {
        from {
            route-filter 192.168.0.0/16 prefix-length-range /24-/32 reject;
        }
    }
}
community cust-1 members 65412:301;
community cust-2 members 65412:302;
community peer-1 members 65412:201;
community trans-1 members 65412:101;
community trans-2 members 65412:102;
as-path cust-1 ".* 65010";
as-path cust-2 ".* 65020";
end example
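Finally, note the local-as 65413 statement in r7's c2 group: it causes r7 to present AS 65413 to the 172.16.0.26 peer even though r7 itself resides in confederation member AS 65002, which is how the session establishes against a peer expecting that AS number. A quick way to verify what each side believes, sketched here:

lab@r7> show bgp neighbor 172.16.0.26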



